<h1>Tim's Tech blog</h1>
<p>Tim Soethout (<a href="mailto:blog@timmybankers.nl">blog@timmybankers.nl</a>), <a href="https://blog.timmybankers.nl">blog.timmybankers.nl</a>, last updated 2022-06-15</p>
<h1><a href="https://blog.timmybankers.nl/2020/08/03/Path-Sensitive-Atomic-Commit">Leverage Domain Knowledge for Faster Distributed Transactions</a></h1>
<p><em>2020-08-03</em></p>
<blockquote>
<p>Blog post about paper: Path-Sensitive Atomic Commit: Local Coordination Avoidance for Distributed Transactions @ <a href="https://doi.org/10.22152/programming-journal.org/2021/5/3">https://doi.org/10.22152/programming-journal.org/2021/5/3</a></p>
</blockquote>
<blockquote>
<p>Cross-posted on <a href="https://medium.com/ing-blog/leverage-domain-knowledge-for-faster-distributed-transactions-f9d0b2c266fd">ING’s Tech Blog</a>.</p>
</blockquote>
<!-- > Motivation: Abstractions, such as DSLs, can help business users to grasp IT trade-offs between performance and functional requirements. Automatically generating the implementation, picking the best-performing implementation, helps achieving this goal. -->
<!-- > TLDR: There is ample opportunity for cleverly leveraging high-level models when generating code for better performance, scalability and specialized synchronization. -->
<p>TLDR: Safely optimize distributed transactions by leveraging high-level domain-specific models.</p>
<p>Many tools and libraries in software try to make the work of engineers easier: to speed up development, but also to close the gap between IT and business.
These tools provide abstractions that focus on writing business logic.
Within ING Bank this is no different. We use and create tools and abstractions that are closer to the business and abstract away implementation details: <a href="https://medium.com/ing-blog/baker-a-microservice-orchestration-library-e2d162be3d71">Baker</a>, <a href="https://medium.com/ing-blog/cucumber-ing-making-it-part-of-the-agile-workflow-4b53926fbd6">Cucumber</a>, <a href="https://medium.com/ing-blog/micro-front-end-architecture-rapid-development-in-a-startup-environment-10270dca1d5b">front-end libraries</a> and <a href="https://github.com/ing-bank/lion">components</a>, <a href="https://github.com/ing-bank/scruid">query creators</a>, <a href="https://github.com/ing-bank/zkkrypto">cryptography primitives</a>, <a href="https://github.com/ing-bank/rokku">security layers</a>, our internal API SDK, and a lot more.</p>
<p>The premise of this blog post is no different. We want to describe high-level business logic without being bothered by low-level implementation details. However, creating a performant implementation of such logic is non-trivial.
This blog describes an approach in which this high-level domain knowledge, encoded in a model, is used to optimize distributed transactions.
This even gives us an advantage over general-purpose transaction mechanisms, which cannot depend on this extra domain knowledge, and it can be used to optimize transactions between microservices.</p>
<!-- This effectively splits up developers in two categories, tool users and tool developers. -->
<!-- I spend a lot of time the last years devising an algorithm that leverages semantically rich models of objects to speed up coordination between objects: Path-Sensitive Atomic Commit or Local Coordination Avoidance. -->
<p>In other words: we want to make transactions faster, automatically.
Our algorithm, Path-Sensitive Atomic Commit (PSAC), provides a more performant synchronization implementation for automatically generated implementations. This enables writing high-level business logic or functional requirements, while letting the algorithm take care of performance at run time.</p>
<p>PSAC’s main idea is to use the explicit domain knowledge to improve concurrency where safely possible, e.g. multiple concurrent withdrawals on a bank account are safe when there is enough balance available for all of them. Determining this, however, can be computationally more expensive.</p>
<p>Of course this algorithm’s performance has to be evaluated.
Does it really perform better than a baseline implementation?
Here is a sneak preview of the performance results: PSAC performs up to 1.8 times better than 2PL/2PC in a high-contention scenario.</p>
<p><img src="https://blog.timmybankers.nl/assets/images/psac2pc-sync1000-1.svg" alt="Throughput of 2PL/2PC and PSAC" /></p>
<h2 id="background-distributed-transactions">Background: Distributed Transactions</h2>
<p>Transactions are a mechanism to limit the complexities inherent to concurrent and distributed systems, such as dealing with hardware failure, application crashes, network interruptions, multiple clients writing to the same resource, reading of partial updates, and data races.
ACID transactions are the standard in databases. <a href="https://en.wikipedia.org/wiki/ACID">ACID</a> stands for Atomic, Consistent, Isolated and Durable.</p>
<p><em>Atomic Commit & Two-Phase Commit (2PC):</em>
<a href="https://en.wikipedia.org/wiki/Two-phase_commit_protocol">2PC</a> is a well-studied atomic commitment protocol. Atomic Commit requires that multiple resources agree on an action: all should do it or none should do it. This must also hold in case of failure of one of the resources.
Resources in this case can be distributed over multiple server nodes, or can even be different applications (see <a href="https://en.wikipedia.org/wiki/X/Open_XA">XA</a>).</p>
<p>2PC works with a transaction manager and multiple transaction resources.
The manager asks the resources to vote on an action. Only when it receives a commit vote from all of them does it tell them to globally commit and apply the decision.
<!-- If any resource votes to abort, it globally aborts the transaction. -->
<!-- When a resource votes to commit it promises to durably store and accept a later global commit. This makes sure it continue in case of failure. --></p>
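<p>As a sketch of the decision rule only (the message names are made up for this illustration, not tied to any particular library), the manager’s decision could look like this:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Illustrative 2PC message types; the names are hypothetical.
object TwoPhaseCommit {
  sealed trait Vote
  case object VoteCommit extends Vote
  case object VoteAbort  extends Vote

  sealed trait GlobalDecision
  case object GlobalCommit extends GlobalDecision
  case object GlobalAbort  extends GlobalDecision

  // The manager commits globally only when every resource voted commit;
  // a single abort vote (or a missing vote) aborts the whole transaction.
  def decide(votes: Seq[Vote], expectedResources: Int): GlobalDecision =
    if (votes.size == expectedResources && votes.forall(_ == VoteCommit)) GlobalCommit
    else GlobalAbort
}
</code></pre></div></div>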
<!-- The biggest drawback of 2PC is blocking behaviour when the transaction manager crashes between receiving votes and globally committing or aborting. Now the resources are left dangling, because they promised to wait on commit on the manager, and can not continue without it. -->
<p><em>Concurrency Control & Two-Phase Locking (2PL)</em>:
<a href="https://en.wikipedia.org/wiki/Two-phase_locking">2PL</a> is a concurrency control mechanism that uses locking to make sure that no concurrent changes are made to a resource.</p>
<p><em>Distributed Transactions</em>:
2PL and 2PC can be combined to implement ACID distributed transactions.
The locks are on the level of the 2PC resources. When a resource has voted, it is considered locked. Only after handling a global commit or abort is it unlocked again. This makes sure no other transactions can change the data in the meantime.</p>
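<p>A minimal sketch of such a resource, with hypothetical names and ignoring durability and failure handling, could look as follows. A real implementation would queue a conflicting vote request instead of refusing it, and would persist its vote so it survives crashes.</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Sketch of a 2PC resource with 2PL-style locking (no persistence, no queuing).
final class Resource[S](initial: S) {
  private var state: S = initial
  private var lockedBy: Option[Long] = None // id of the transaction holding the lock

  // Vote commit only if no other transaction holds the lock and the
  // precondition holds on the current state; voting commit takes the lock.
  def vote(txId: Long)(pre: S => Boolean): Boolean =
    if (lockedBy.isEmpty && pre(state)) { lockedBy = Some(txId); true }
    else false

  // A global commit applies the effect and releases the lock.
  def globalCommit(txId: Long)(post: S => S): Unit =
    if (lockedBy.contains(txId)) { state = post(state); lockedBy = None }

  // A global abort only releases the lock; the state is untouched.
  def globalAbort(txId: Long): Unit =
    if (lockedBy.contains(txId)) lockedBy = None
}
</code></pre></div></div>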
<h2 id="path-sensitive-atomic-commit">Path-sensitive Atomic Commit</h2>
<h3 id="models-in-rebel">Models in <a href="https://github.com/cwi-swat/rebel">Rebel</a></h3>
<p>Let’s first look at an example of such semantically rich models.
We use <a href="https://github.com/cwi-swat/rebel">Rebel</a> (<a href="https://dl.acm.org/doi/10.1145/2998407.2998413">paper</a>), a domain-specific language for financial products, based on state machines. The concept of leveraging model knowledge is not limited to Rebel.
Our example is a bank account system consisting of money transfers and accounts with balances, which should never go below 0, visualized as state charts:</p>
<p><img src="https://blog.timmybankers.nl/assets/images/progamming-state-charts.svg" alt="Rebel State Charts" /></p>
<p>In the textual representation, we see different classes, with some internal data, representing the account balance and identities. On each of the states, events are defined with pre- and postconditions, e.g. <code class="language-plaintext highlighter-rouge">Withdraw</code> is only valid when the account has enough balance available.
The <code class="language-plaintext highlighter-rouge">MoneyTransfer</code> class has a special construct <code class="language-plaintext highlighter-rouge">sync</code> which represents an atomic synchronized event, where money is <code class="language-plaintext highlighter-rouge">Withdraw</code>n from one account and <code class="language-plaintext highlighter-rouge">Deposit</code>ed to another. Either both happen or neither does.</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="k">class</span> <span class="nc">Account</span>
<span class="n">accountNumber</span><span class="k">:</span> <span class="kt">Iban</span> <span class="kt">@identity</span>
<span class="n">balance</span><span class="k">:</span> <span class="kt">Money</span>
<span class="n">initial</span> <span class="n">init</span>
<span class="n">on</span> <span class="nc">Open</span><span class="o">(</span><span class="n">initialDeposit</span><span class="k">:</span> <span class="kt">Money</span><span class="o">)</span><span class="k">:</span> <span class="kt">opened</span>
<span class="n">pre</span><span class="k">:</span> <span class="kt">initialDeposit</span> <span class="kt">>=</span> <span class="err">€0</span>
<span class="kt">post:</span> <span class="kt">this.balance</span> <span class="o">=</span><span class="k">=</span> <span class="n">initialDeposit</span>
<span class="n">opened</span>
<span class="n">on</span> <span class="nc">Withdraw</span><span class="o">(</span><span class="n">amount</span><span class="k">:</span> <span class="kt">Money</span><span class="o">)</span><span class="k">:</span> <span class="kt">opened</span>
<span class="n">pre</span><span class="k">:</span> <span class="kt">amount</span> <span class="kt">></span> <span class="err">€0</span><span class="o">,</span> <span class="n">balance</span> <span class="o">-</span> <span class="n">amount</span> <span class="o">>=</span> <span class="err">€</span><span class="mi">0</span>
<span class="n">post</span><span class="k">:</span> <span class="kt">this.balance</span> <span class="o">=</span><span class="k">=</span> <span class="n">balance</span> <span class="o">-</span> <span class="n">amount</span>
<span class="n">on</span> <span class="nc">Deposit</span><span class="o">(</span><span class="n">amount</span><span class="k">:</span> <span class="kt">Money</span><span class="o">)</span><span class="k">:</span> <span class="kt">opened</span>
<span class="n">pre</span><span class="k">:</span> <span class="kt">amount</span> <span class="kt">></span> <span class="err">€0</span>
<span class="kt">post:</span> <span class="kt">this.balance</span> <span class="o">=</span><span class="k">=</span> <span class="n">balance</span> <span class="o">+</span> <span class="n">amount</span>
<span class="n">on</span> <span class="nc">Close</span><span class="o">()</span><span class="k">:</span> <span class="kt">closed</span>
<span class="k">final</span> <span class="n">closed</span>
<span class="k">class</span> <span class="nc">MoneyTransfer</span>
<span class="n">initial</span> <span class="n">init</span>
<span class="n">on</span> <span class="nc">Book</span><span class="o">(</span><span class="n">amount</span><span class="k">:</span> <span class="kt">Money</span><span class="o">,</span> <span class="n">to</span><span class="k">:</span> <span class="kt">Account</span><span class="o">,</span> <span class="n">from</span><span class="k">:</span> <span class="kt">Account</span><span class="o">)</span><span class="k">:</span> <span class="kt">booked</span>
<span class="n">sync</span><span class="k">:</span>
<span class="kt">from.Withdraw</span><span class="o">(</span><span class="kt">amount</span><span class="o">)</span>
<span class="kt">to.Deposit</span><span class="o">(</span><span class="kt">amount</span><span class="o">)</span>
<span class="kt">final</span> <span class="kt">booked</span>
</code></pre></div></div>
<p>We can see how these kinds of models can represent different business logic on a relatively high level.</p>
<h3 id="rebel-with-2pc2pl">Rebel with 2PC/2PL</h3>
<p>If we want to implement these models in a scalable system, we can represent all instances of these objects as 2PC resources. This means that they can be interacted with separately, until synchronization (using <code class="language-plaintext highlighter-rouge">sync</code>) is requested. Locally each resource does 2PL, making sure that data is not changed concurrently, and 2PC is used to coordinate the sync.</p>
<table>
<tbody>
<tr>
<td><img src="https://blog.timmybankers.nl/assets/images/programming-PSAC-2pc.svg" alt="2PL/2PC example" /></td>
<td><img src="https://blog.timmybankers.nl/assets/images/programming-PSAC-psac.svg" alt="PSAC example" /></td>
</tr>
</tbody>
</table>
<p>The illustration above on the left describes what happens for such a resource (Account Entity). Time runs vertically, and the arrows represent messages sent and received.</p>
<p>First (1) a vote request is received from a 2PC manager, preconditions are checked and the resource is locked. When another event (2) arrives, it is delayed. When the 2PC manager later signals the commit (3), the event’s effects are applied to the resource’s internal state and the resource is unlocked.
Now the delayed event can start as well.
We see that in this way all events are nicely serialized for this resource and no preconditions are checked against possibly invalid (partial) state. Events do have to wait on each other in this case, however, which can become a problem for busy resources.</p>
<h3 id="psac">PSAC</h3>
<p>When looking at the account model above, the most interesting precondition is <code class="language-plaintext highlighter-rouge">balance - amount >= €0</code> of <code class="language-plaintext highlighter-rouge">Withdraw</code>, denoting that there should be enough balance available for the <code class="language-plaintext highlighter-rouge">Withdraw</code> to be allowed.
2PL only allows a single <code class="language-plaintext highlighter-rouge">Withdraw</code> to be in progress at the same time by locking the Account resource. If we naively allowed multiple concurrent <code class="language-plaintext highlighter-rouge">Withdraw</code>s on an account, precondition checks could interleave, resulting in a balance below zero.
Enter Path-Sensitive Atomic Commit:</p>
<p>PSAC enables multiple concurrent events to be in progress at the same time, resulting in lower latency for individual events, because events are neither locked out nor delayed.</p>
<p><em>But how does it keep that safe?:</em>
PSAC makes multiple concurrent <code class="language-plaintext highlighter-rouge">Withdraw</code>s safe by keeping track of all in-progress events. It effectively tracks all possible outcome states of the in-progress events, and when a concurrent event arrives, its preconditions can be checked against all outcomes. If the preconditions hold in all states, the event can already be accepted for processing (and the 2PC commit vote sent). The same goes for aborting: if the preconditions fail in all states, the event is rejected.
If the preconditions hold in some states, but not all, PSAC falls back to 2PL/2PC behavior and delays the event.
For our <code class="language-plaintext highlighter-rouge">Withdraw</code> example, multiple <code class="language-plaintext highlighter-rouge">Withdraw</code>s can be in progress concurrently when there is enough balance available for all.
Other examples such as <code class="language-plaintext highlighter-rouge">Deposit</code>s can also run concurrently, because adding money to an account is always allowed by its preconditions.</p>
<p>The PSAC diagram (above on the right) explains in more detail how this works and represents the internal decisions of the sequence diagrams above:</p>
<ol>
<li>The <code class="language-plaintext highlighter-rouge">Withdraw</code> arrives and since there are no events in progress, the preconditions are checked against the account state of €100. Now internally there are two possible outcome states, represented by the arrows: €100, when the <code class="language-plaintext highlighter-rouge">Withdraw</code> is eventually aborted by the transaction manager, and €70 when the <code class="language-plaintext highlighter-rouge">Withdraw</code> is committed. <code class="language-plaintext highlighter-rouge">+</code> and <code class="language-plaintext highlighter-rouge">-</code> respectively representing the global commit and global abort.</li>
<li>Now when another <code class="language-plaintext highlighter-rouge">Withdraw</code> arrives, the possible outcomes tree is split again for the existing possible states.</li>
<li>When a <code class="language-plaintext highlighter-rouge">Withdraw</code> of €60 arrives, it is delayed, because in some of the outcome states its preconditions are valid and in some not.</li>
<li>As -€50 commits, the outcome tree can be pruned, and the leaves where it had aborted are cut off.</li>
<li>Now -€60 can be retried and can be directly rejected (<code class="language-plaintext highlighter-rouge">Fail</code>), because in no possible outcome state its preconditions hold.</li>
<li>When the first event commits, the last open branch is pruned as well, and the state can be calculated by applying the postconditions of all events in order of original arrival.</li>
</ol>
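<p>As a rough sketch of this decision logic (my own illustration, not the paper’s implementation), the accept/reject/delay choice boils down to evaluating the precondition against every possible outcome balance:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>// Hypothetical sketch of PSAC's accept/reject/delay decision.
object PsacSketch {
  sealed trait Decision
  case object Accept extends Decision // precondition holds in every possible outcome
  case object Reject extends Decision // precondition holds in no possible outcome
  case object Delay  extends Decision // holds in some outcomes: fall back to 2PL/2PC

  // `outcomes` holds every balance the account may still end up with,
  // depending on which in-progress events eventually commit or abort.
  def decide(outcomes: Set[BigDecimal])(pre: BigDecimal => Boolean): Decision = {
    val results = outcomes.map(pre)
    if (!results.contains(false)) Accept
    else if (!results.contains(true)) Reject
    else Delay
  }

  // Step 3 of the walkthrough: with withdrawals of €30 and €50 in progress on a
  // balance of €100, the possible outcomes are {100, 70, 50, 20}. A €60 withdrawal
  // holds in some outcomes but not all, so it is delayed.
  val outcomes = Set(100, 70, 50, 20).map(BigDecimal(_))
  val decision = decide(outcomes)(b => b - 60 >= 0) // Delay
}
</code></pre></div></div>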
<p>This blog post’s goal is to intuitively sketch the algorithm. Please see <a href="https://doi.org/10.22152/programming-journal.org/2021/5/3">the paper</a> for more details.</p>
<h2 id="does-it-perform">Does it perform?</h2>
<p>We implemented code generators from the Rebel specifications to both 2PL/2PC and PSAC, on top of the Akka actor toolkit.</p>
<p>Experiments using the bank account system example above are scaled over an increasing number of nodes (and Cassandra database nodes) on Amazon AWS virtual machines. In this setup as many money transfers as possible are executed, with accounts picked uniformly from 1000 bank accounts. This artificially increases the contention.
The graph below contains <a href="https://en.wikipedia.org/wiki/Violin_plot">violin plots</a> that show all captured throughput numbers and a fit through them. The transparent line is the linear scalability upper bound.
In this graph we see both algorithms and their throughput numbers. PSAC outperforms 2PL/2PC, which is explained by the increased concurrency.</p>
<p><img src="https://blog.timmybankers.nl/assets/images/psac2pc-sync1000-amdahl-plot-1.svg" alt="Throughput of 2PL/2PC and PSAC" /></p>
<h2 id="concluding">Concluding</h2>
<p>We thus see that PSAC performs up to 1.8 times better than 2PL/2PC in this high-contention scenario. This promises good results in other situations with other models. We expect to improve the throughput (and latency) even more under higher contention, such as when a bank has to execute a lot of transactions involving a single bank account, e.g. when a tax office pays out benefits to citizens.</p>
<p>This shows that higher-level semantically rich models, such as Rebel, offer possibilities for bridging the gap between declarative high-level models and optimized implementations.</p>
<blockquote>
<p>This paper is part of my PhD project in the ongoing collaboration between ING Bank and Centrum Wiskunde & Informatica (CWI) on managing complexity of enterprise software ecosystems.</p>
</blockquote>
<h1><a href="https://blog.timmybankers.nl/2018/06/14/Notes-deconz">Notes DeCONZ</a></h1>
<p><em>2018-06-14</em></p>
<p>Here are some notes and command lines I used to set DeConz up on my Odroid C2. Mainly for my backup, but if someone needs more info, I’m glad to try and remember how I did it.</p>
<h2 id="deconz">Deconz</h2>
<p><code class="language-plaintext highlighter-rouge">sudo apt-get install libqt5serialport5 libqt5websockets5 libqt5sql5 sqlite3</code></p>
<p><code class="language-plaintext highlighter-rouge">sudo dpkg --ignore-depends=wiringpi:armhf -i deconz-2.05.29-qt5.deb</code></p>
<p>Unfortunately apt will not forget the dependency, and keeps complaining.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>The following packages have unmet dependencies:
deconz:armhf : Depends: wiringpi:armhf but it is not installable
</code></pre></div></div>
<p>Fix: <code class="language-plaintext highlighter-rouge">sudo nano /var/lib/dpkg/status</code>
Remove <code class="language-plaintext highlighter-rouge">wiringpi</code> from Depends (search for it in this file).</p>
<p><code class="language-plaintext highlighter-rouge">sudo nano /usr/lib/systemd/system/deconz.service</code>
or <code class="language-plaintext highlighter-rouge">/lib/systemd/system/deconz.service</code> (in newer versions of deconz?)</p>
<ul>
<li>change the user to root (not pi) (TODO: should use odroid or a specific user for this)</li>
<li>comment out (#) the capabilities (they give an error on start; I don’t know why, perhaps a too-old version of systemd?)</li>
</ul>
<p><code class="language-plaintext highlighter-rouge">systemctl daemon-reload</code>
<code class="language-plaintext highlighter-rouge">journalctl -e -u deconz</code></p>
<h2 id="home-assistant">Home Assistant</h2>
<p><a href="https://github.com/ggravlingen/pytradfri/blob/master/script/install-coap-client.sh">install-coap-client.sh</a> is required for pytradfri.</p>
<h1><a href="https://blog.timmybankers.nl/2016/09/13/Headless-Videostream">Headless Videostream</a></h1>
<p><em>2016-09-13</em></p>
<p>I have a file server with some media on it that I want to stream to my Chromecast. Unfortunately the server is an old 32-bit Atom server without much power. It is even too slow for streaming using Plex Media Server.</p>
<p>I am using a <a href="https://chrome.google.com/webstore/detail/videostream-for-google-ch/cnciopoikihiagdjbjpnocolokfelagl">Chrome Extension/App VideoStream</a> on a MacBook to watch on my Chromecast, while having the media files mounted over AFP.
This is workable but not ideal, because I need to have the laptop running, the mount ready, and Chrome with VideoStream running, and sometimes I need to reindex all my videos.</p>
<p>I found a way to run this on my simple low-powered Ubuntu server.</p>
<h3 id="1-install-google-chrome">1. Install google-chrome</h3>
<p>NB. Use google-chrome, chromium had problems using the Chromecast for me.</p>
<p>I used the latest i386 version that I could find:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>wget http://bbgentoo.ilb.ru/distfiles/google-chrome-stable_48.0.2564.116-1_i386.deb
sudo dpkg -i google-chrome-stable_48.0.2564.116-1_i386.deb
sudo apt-get -f install
</code></pre></div></div>
<p>The last line was to force the dependencies to be installed correctly.</p>
<h3 id="2-install-videostream">2. Install VideoStream</h3>
<p>Now I connected to my server using <code class="language-plaintext highlighter-rouge">ssh -X</code> to enable X-forwarding. I could start Chrome now and install the Extension from https://chrome.google.com/webstore/detail/videostream-for-google-ch/cnciopoikihiagdjbjpnocolokfelagl . On the first start it will also install the Chromecast extension for you.</p>
<p>Now setup the connection to the mobile VideoStream app (on Android for me).</p>
<h3 id="3-run-it-headless">3. Run it Headless</h3>
<p>Install xvfb to be able to run Chrome headless.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>sudo apt install xvfb
</code></pre></div></div>
<p>And run it! The app-id is the VideoStream app id, the same as in the URL in the Chrome Web Store.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>xvfb-run google-chrome --app-id=cnciopoikihiagdjbjpnocolokfelagl > /dev/null &
</code></pre></div></div>
<p>Extra:
This might help to kill it again.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>killall Xvfb
</code></pre></div></div>
<h2 id="finally">Finally</h2>
<p>This seems to work, although the streaming starts up a little slower than when I’m using the MacBook. It helps to pause it a bit to let the slow server catch up, and then it seems to run just fine.
I will be trying this out some more over the next few days and hope it really works as well as I am hoping.</p>
<h1><a href="https://blog.timmybankers.nl/2016/06/20/Capturing-REPLesent-Output">Capturing REPLesent Output</a></h1>
<p><em>2016-06-20</em></p>
<p>For my presentation at ScalaDays I used <a href="https://github.com/marconilanna/REPLesent">REPLesent</a>, a very neat tool that allows you to create slides in the Scala REPL and evaluate the code on the slides.</p>
<p>I wanted to also put this output in the resulting slide deck for my presentation, so I wrote a script in AppleScript which captures the console window and goes through the slides, screenshotting every slide and its REPL output.</p>
<p>You can find the source in the gist below. It assumes you have the presentation and REPL loaded in iTerm and that it is full screen with the tabs/toolbar hidden.</p>
<script src="https://gist.github.com/48c4ff62013b37e866b2e93aa676efc6.js"> </script>
<p>Have fun using it!</p>
<h1><a href="https://blog.timmybankers.nl/2016/05/11/Implicits-Inspected-and-Explained">Implicits Inspected and Explained @ ScalaDays 2016</a></h1>
<p><em>2016-05-11</em></p>
<p>In June I gave the presentation on implicits at Scala Days Berlin. The room was packed, which was really nice to see. In the slide deck below you can find my slides, with screenshots of the REPL demos included.
Feel free to ask me any questions if anything was unclear.</p>
<h2 id="implicits-inspected-and-explained">Implicits Inspected and Explained</h2>
<h3 id="references">References</h3>
<p><a href="https://blog.timmybankers.nl/implicits-inspected-and-explained-slides">Slides</a>
/
<a href="https://github.com/TimSoethout/implicits-inspected-and-explained-slides/tree/gh-pages/code">Code</a></p>
<ul>
<li>Scala documentation:
<ul>
<li><a href="https://docs.scala-lang.org/overviews/collections/conversions-between-java-and-scala-collections.html">Java Converters</a></li>
<li><a href="https://docs.scala-lang.org/tutorials/FAQ/finding-implicits.html">Finding Implicits</a></li>
</ul>
</li>
<li>Book: <a href="https://www.manning.com/books/scala-in-depth">Scala In Depth</a></li>
<li><a href="https://twitter.github.io/effectivescala/">Effective Scala</a></li>
<li>Blog <a href="https://lalitpant.blogspot.nl/2008/08/scala-implicits-dose-of-magic-part-1.html">All Things Runnable</a></li>
<li><a href="https://gitter.im/scala/scala">Scala Gitter channel</a>, special thanks to @Ichoran and @som-snytt</li>
<li><a href="https://github.com/marconilanna/REPLesent">REPLesent</a>, for the demo slides</li>
</ul>
<p>Presentation at ScalaDays Berlin:</p>
<iframe src="//www.slideshare.net/slideshow/embed_code/key/lwOwnjRbNlamT5" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen=""> </iframe>
<div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/TimSoethout/implicits-inspected-and-explained-scaladays-2016-berlin" title="Implicits Inspected and Explained @ ScalaDays 2016 Berlin" target="_blank">Implicits Inspected and Explained @ ScalaDays 2016 Berlin</a> </strong> from <strong><a href="//www.slideshare.net/TimSoethout" target="_blank">Tim Soethout</a></strong> </div>
<p>Previously I also gave the presentation at ScalaDays New York, where it was received well.
Below you can see my slides with the demos embedded as screenshots.
Feel free to drop any comments and come see my presentation at ScalaDays Berlin in June.</p>
<p>Presentation at ScalaDays New York:</p>
<iframe src="//www.slideshare.net/slideshow/embed_code/key/DVM8IT83VteCGq" width="595" height="485" frameborder="0" marginwidth="0" marginheight="0" scrolling="no" style="border:1px solid #CCC; border-width:1px; margin-bottom:5px; max-width: 100%;" allowfullscreen=""> </iframe>
<div style="margin-bottom:5px"> <strong> <a href="//www.slideshare.net/TimSoethout/implicits-inspected-and-explained" title="Implicits Inspected and Explained" target="_blank">Implicits Inspected and Explained</a> </strong> from <strong><a href="//www.slideshare.net/TimSoethout" target="_blank">Tim Soethout</a></strong> </div>
<iframe width="560" height="315" src="https://www.youtube.com/embed/UHQbj-_9r8A" frameborder="0" allowfullscreen=""></iframe>
<h1><a href="https://blog.timmybankers.nl/2016/02/21/Syncing-Kobo-And-Google-Drive">Syncing Kobo and Google Drive</a></h1>
<p><em>2016-02-21</em></p>
<p>I finally made it work again. Some posts ago I wrote about <a href="/2014/04/19/Send-to-kobo">sending files to my Kobo e-reader</a> using email via <a href="https://sendtokobo.com/">sendtokobo.com</a>. Unfortunately this service stopped working some time ago.</p>
<p>I still wanted a workflow where I could queue up files to read on my e-reader at a later time. After a day of struggling I found a way that works! I can now dump my files in a Google Drive folder and sync them at a later time on my Kobo when it is connected to the internet. No more hassle connecting the e-reader over USB to my machine, or forgetting to do so when at home.</p>
<p>After a factory reset of my Kobo Aura HD, I installed <a href="https://www.preining.info/blog/2016/01/kobo-firmware-3-19-5761-mega-update-ksm-nickel-patch-ssh-fonts/">this latest and greatest version</a> of a firmware containing Kobo Start Menu (KSM), <a href="https://github.com/koreader/koreader">Koreader</a> and telnet/ssh access. Koreader has better support for pdf than the native reader, and KSM allows for easier unix hacks.
An <a href="https://www.preining.info/blog/2015/08/kobo-glohd-firmware-3-17-0-mega-update-ksm-nickel-patch-ssh-fonts/">older post</a> really helped me figure out how to connect remotely to the Kobo device using <code class="language-plaintext highlighter-rouge">telnet $IP_OF_KOBO</code> with user <code class="language-plaintext highlighter-rouge">root</code> and no password. The same link also explains how to enable the much safer SSH access instead of telnet.</p>
<p>The next step was inspired by <a href="https://gist.github.com/wernerb/7864141#file-sync-sh">this gist</a>, which uses <code class="language-plaintext highlighter-rouge">wget</code> to fetch files from Google Drive. I updated it to work for me:</p>
<script src="https://gist.github.com/4c5db37e7bd1a2b4fb49.js"> </script>
<p>Feel free to use this file and insert your own Google Drive folder ID, for a folder whose sharing you have set to “Anyone with the link can view”.</p>
<p>The file should go somewhere on the Kobo device; I put mine in <code class="language-plaintext highlighter-rouge">/mnt/onboard/.adds/kbmenu_user/scripts/sync.sh</code> to make sure it comes up in <code class="language-plaintext highlighter-rouge">custom scripts</code> in the KSM.</p>
<p>Next I ran into another problem because Google uses <code class="language-plaintext highlighter-rouge">https</code> and the <code class="language-plaintext highlighter-rouge">wget</code> provided on the Kobo device is not compiled with the appropriate flags to support this.
Fortunately there was <a href="https://github.com/spMatti/kobo-wget-sync">another attempt on GitHub</a> at syncing with Google Drive, which actually downloads a version of <code class="language-plaintext highlighter-rouge">wget</code> that does.
I installed this on my device using the instructions, but probably getting only <code class="language-plaintext highlighter-rouge">wget</code> from it would have been enough.</p>
<p>Now I have everything to make this workflow happen:</p>
<ul>
<li>Drop <code class="language-plaintext highlighter-rouge">epub</code>/<code class="language-plaintext highlighter-rouge">pdf</code> file in my shared folder in Google drive.</li>
<li>Enable wifi on the Kobo</li>
<li>Hit <code class="language-plaintext highlighter-rouge">sync.sh</code> in <code class="language-plaintext highlighter-rouge">custom scripts</code></li>
<li>Wait a bit until the downloads are in (the screen flashes)</li>
<li>And enjoy the files being placed in <code class="language-plaintext highlighter-rouge">gdrive</code></li>
<li>(You might need to trigger the library refresh somehow if you use the native Kobo reader application instead of Koreader)</li>
</ul>
<h1><a href="https://blog.timmybankers.nl/2015/10/29/Scala-Meetup-Amsterdam-talk-on-mixing-java-scala">Scala Amsterdam Meetup talk on mixing Java and Scala in a single module</a></h1>
<p><em>2015-10-29</em></p>
<p>Last night I gave a presentation on some of the work I have been doing on incorporating Scala into an existing Java Maven project. I took a step-by-step approach while keeping code quality up. A large part was about how to achieve meaningful code coverage on both Java and Scala source files.
Tools such as SonarQube can help you store metrics such as code coverage and issues from FindBugs and ScalaStyle.
It is also useful to enable compiler flags that warn and help writing better code.</p>
<p>Here you can find the links to the slides and code:</p>
<ul>
<li>
<p>Slides - <a href="https://blog.timmybankers.nl/scala-java-maven-slides">https://blog.timmybankers.nl/scala-java-maven-slides</a></p>
</li>
<li>
<p>Code - <a href="https://github.com/TimSoethout/scala-java-maven-code">https://github.com/TimSoethout/scala-java-maven-code</a></p>
</li>
</ul>
<p>If you’re setting up your own Scala/Java mix up, feel free to use these resources:</p>
<ul>
<li>
<p>scala-sonarqube-docker - <a href="https://github.com/TimSoethout/scala-sonarqube-docker">https://github.com/TimSoethout/scala-sonarqube-docker</a></p>
</li>
<li>
<p>sonar-scala-plugin <a href="https://github.com/TimSoethout/sonar-scala/releases">https://github.com/TimSoethout/sonar-scala/releases</a></p>
</li>
<li>
<p>xml-transform-maven-plugin - <a href="https://github.com/TimSoethout/transform-xml-maven-plugin">https://github.com/TimSoethout/transform-xml-maven-plugin</a></p>
</li>
</ul>
<p>You can also see my previous blog post on the (older) details on the setup: <a href="https://blog.timmybankers.nl/2015/06/07/Mixing-Java-Scala-With-Sonar">https://blog.timmybankers.nl/2015/06/07/Mixing-Java-Scala-With-Sonar</a></p>
<h1><a href="https://blog.timmybankers.nl/2015/10/20/Coloured-Maven-output-on-Fish">Coloured Maven output in fish shell</a></h1>
<p><em>2015-10-20</em></p>
<p>Here is a nice gist I found which will instantaneously give you simple colored maven output in the <a href="https://fishshell.com">fish shell</a>:</p>
<script src="https://gist.github.com/a9b74c537f667a8dd28e.js"> </script>
<p>Just put it in <code class="language-plaintext highlighter-rouge">~/.config/fish/functions/mvn.fish</code> and it will be used when <code class="language-plaintext highlighter-rouge">mvn</code> is used on your shell.</p>
<h1><a href="https://blog.timmybankers.nl/2015/06/07/Mixing-Java-Scala-With-Sonar">Mixing Java & Scala with Sonar with correct code coverage</a></h1>
<p><em>2015-06-07</em></p>
<p>Recently we added Scala to a Java Maven project. This worked perfectly fine, until we looked at the Sonar report. It turns out that having nice automated code checks for a combined Java/Scala project is quite hard.
Last week it was solved. This post is to write down the lessons learned, for it might help others.</p>
<p>Using a single language is fine:</p>
<ul>
<li>
<p>Java + Sonar works perfectly fine, using FindBugs, PMD, any code coverage framework you would like to use (Cobertura, Jacoco etc).</p>
</li>
<li>
<p>Scala + Sonar also works fine, using the <a href="https://github.com/1and1/sonar-scala">sonar-scala plugin</a>, ScalaStyle and <a href="https://github.com/RadoBuransky/sonar-scoverage-plugin">Scoverage</a> plugin.</p>
</li>
</ul>
<p>The combination of the two in a single code base makes it hard.</p>
<h1 id="code-coverage-methods-for-java-and-scala">Code coverage methods for Java and Scala</h1>
<p>Cobertura uses instrumentation on the bytecode to scan the coverage. This also works on Scala code, but since the Scala compiler generates a lot more bytecode for case classes, traits, etc., the coverage will be around 20% lower than it really is.</p>
<p>Another option is Jacoco, which by default uses an agent to instrument the code when it is loaded by the class loader. This seems to work fine but does not take the Scala code into account properly.</p>
<p>For dedicated Scala coverage, Scoverage is a good candidate. Scoverage does statement coverage, which means it has more fine-grained knowledge: it measures coverage per statement and not per line, as most Java coverage tools (such as Cobertura and Jacoco) do. This suits Scala better, since one often writes more expressive code, which means more information and semantics are embedded in a single line of code.
Scoverage only scans Scala source files.</p>
<p>Jacoco turned out not to be an option for us because it interprets the bytecode too strictly. In certain cases the Java compiler generates more bytecode than you normally cover. For example, an expression such as <code class="language-plaintext highlighter-rouge">if(!someBool())</code> results in 4 branches being generated, possibly by first evaluating <code class="language-plaintext highlighter-rouge">someBool()</code> and assigning it to a variable and then doing the actual check. Not all those branches are correctly covered by unit tests, which results in lower coverage. On the GitHub issue tracker, <a href="https://github.com/jacoco/jacoco/issues/15">a two-year-old issue</a> promises that these kinds of problems will be solved by filters for the report generation, but this is not available yet.</p>
<p>For us this was not acceptable, since we want to automatically check the coverage levels; if the level is tens of percentage points lower depending on which kinds of statements we use, it would not help us very much.</p>
<h1 id="the-solution">The Solution</h1>
<p>The approach became: Cobertura for Java source files, Scoverage for Scala source files.
We also needed Sonar version 4.5 or higher (we picked 5.1), because it supports mixed projects with multiple languages.</p>
<p>This presented us with two challenges:</p>
<ol>
<li>Both the Java CoberturaSensor and Scala CoberturaSensor kicked in when <code class="language-plaintext highlighter-rouge">mvn sonar:sonar</code> was run, which resulted in duplicate coverage information for the same file, failing the Sonar run.</li>
<li>Both Cobertura’s coverage file and Scoverage’s coverage file contain information for the Scala source files.</li>
</ol>
<h2 id="1-multiple-coberturasensors">1. Multiple CoberturaSensor’s</h2>
<p>After a lot of debugging I found that it was not possible to fix this through configuration: both CoberturaSensors use the same <code class="language-plaintext highlighter-rouge">reportPath</code> property to find the coverage file. I wanted to disable the Scala Cobertura scanner, since I did not want to use Cobertura for the Scala sources. In the end I had to change the Scoverage Sonar plugin code to make this happen, by introducing a different property <code class="language-plaintext highlighter-rouge">sonar.scala.cobertura.reportPath</code> which is used if specified. I used this to point the <code class="language-plaintext highlighter-rouge">ScalaCoberturaSensor</code> to a non-existing file, so it was skipped and only the Java <code class="language-plaintext highlighter-rouge">CoberturaSensor</code> would actually run and submit to Sonar.
See my <a href="https://github.com/TimSoethout/sonar-scala">fork</a> and the <a href="https://github.com/1and1/sonar-scala/pull/1">pull request</a> for the change.</p>
<h2 id="2-duplicate-coverage-for-scala-source-files">2. Duplicate coverage for Scala source files</h2>
<p>Now the problem became coverage information being inserted twice: once by the remaining (Java) Cobertura run, whose report still contained the Scala source coverage information, and once by the Scoverage report.</p>
<p>Since I only wanted the qualitatively higher coverage of Scoverage, I decided to delete the Scala coverage information from Cobertura’s <code class="language-plaintext highlighter-rouge">coverage.xml</code>. Fortunately this turned out to be quite easy, by removing everything matched by this XPath: <code class="language-plaintext highlighter-rouge">//class[contains(@filename,'.scala')]</code>.</p>
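<p>For illustration, the same filtering can be expressed in a few lines of Scala with the scala-xml library. This is just a sketch of the idea, not the actual Maven plugin; the file paths are the defaults mentioned below.</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import scala.xml._
import scala.xml.transform._

object FilterCoverage extends App {
  // Drop every <class> element whose filename attribute points to a Scala source,
  // mirroring the XPath //class[contains(@filename,'.scala')].
  object DropScalaClasses extends RewriteRule {
    override def transform(n: Node): Seq[Node] = n match {
      case e: Elem if e.label == "class" && (e \@ "filename").contains(".scala") =>
        Seq.empty // delete the node
      case other => other
    }
  }

  val report  = XML.loadFile("target/site/cobertura/coverage.xml")
  val cleaned = new RuleTransformer(DropScalaClasses).transform(report)
  XML.save("target/cobertura-without-scala.xml", cleaned.head)
}
</code></pre></div></div>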
<p>From what I can tell, Sonar recalculates the coverage information itself, so the calculated, and now incorrect, averages and totals that remain in the filtered coverage XML are ignored by Sonar.</p>
<p>The next step was removing this as part of our automated build. I converted the command to a Ruby one-liner, but ran into issues with the <code class="language-plaintext highlighter-rouge">maven-exec-plugin</code> trying to run it, which had to do with Ruby dependencies that were somehow not available during the build.</p>
<p>Another approach was using the <code class="language-plaintext highlighter-rouge">xml-maven-plugin</code> and an XSLT template to convert the Cobertura XML. This was nice, but I wanted it to also work on a multi-module Maven project, where I could configure it in the parent pom.xml. Unfortunately the reference to the XSLT file was relative to the project being built, which would mean I had to configure this again for every module in the project, or move the XSLT to an absolute location. Neither was acceptable.</p>
<p>I decided to create a Maven plugin which could do this for me. This way I know for sure it will work correctly for multi-module Maven projects, and also on all platforms, including the build server.</p>
<p>I released the (minimal viable) plugin on Maven Central, which was a nice experience in itself.
The source can be found <a href="https://github.com/TimSoethout/transform-xml-maven-plugin">here</a> and it can be included in your project like this:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><build></span>
<span class="nt"><plugins></span>
<span class="nt"><plugin></span>
<span class="nt"><groupId></span>nl.timmybankers.maven<span class="nt"></groupId></span>
<span class="nt"><artifactId></span>transform-xml-maven-plugin<span class="nt"></artifactId></span>
<span class="nt"><version></span>1.0.0<span class="nt"></version></span>
<span class="nt"><executions></span>
<span class="nt"><execution></span>
<span class="nt"><phase></span>prepare-package<span class="nt"></phase></span>
<span class="nt"></execution></span>
<span class="nt"></executions></span>
<span class="nt"><goals></span>
<span class="nt"><goal></span>transform-xml<span class="nt"></goal></span>
<span class="nt"></goals></span>
<span class="nt"><configuration></span>
<span class="nt"><inputXmlPath></span>${project.build.directory}/site/cobertura/coverage.xml<span class="nt"></inputXmlPath></span>
<span class="nt"><outputXmlPath></span>${sonar.build.directory}/${sonar.cobertura.reportPath}<span class="nt"></outputXmlPath></span>
<span class="nt"><xpath></span>//class[contains(@filename,'.scala')]<span class="nt"></xpath></span>
<span class="nt"><action></span>DELETE<span class="nt"></action></span>
<span class="nt"><skipOnFileErrors></span>true<span class="nt"></skipOnFileErrors></span>
<span class="nt"></configuration></span>
<span class="nt"></plugin></span>
<span class="nt"></plugins></span>
<span class="nt"></build></span></code></pre></figure>
<p>For now it only supports the <code class="language-plaintext highlighter-rouge">DELETE</code> action, for the use case described above.
Now I can run my build, including publishing to Sonar, using this one-liner:
<code class="language-plaintext highlighter-rouge">mvn clean cobertura:cobertura scoverage:report prepare-package sonar:sonar</code>
Make sure that <code class="language-plaintext highlighter-rouge">cobertura.report.format</code> is set to <code class="language-plaintext highlighter-rouge">xml</code>, which will result in the coverage information being available in <code class="language-plaintext highlighter-rouge">target/site/cobertura/coverage.xml</code>.</p>
<p>My Sonar properties in Maven:</p>
<figure class="highlight"><pre><code class="language-xml" data-lang="xml"><span class="nt"><sonar-maven-plugin.version></span>2.6<span class="nt"></sonar-maven-plugin.version></span>
<span class="nt"><sonar.jdbc.driver></span>org.postgresql.Driver<span class="nt"></sonar.jdbc.driver></span>
<span class="nt"><sonar.jdbc.url></span>jdbc:...<span class="nt"></sonar.jdbc.url></span>
<span class="nt"><sonar.host.url></span>http://...:9000/<span class="nt"></sonar.host.url></span>
<span class="nt"><sonar.core.codeCoveragePlugin></span>scoverage<span class="nt"></sonar.core.codeCoveragePlugin></span>
<span class="nt"><sonar.java.coveragePlugin></span>cobertura<span class="nt"></sonar.java.coveragePlugin></span>
<span class="nt"><sonar.junit.reportsPath></span>target/surefire-reports<span class="nt"></sonar.junit.reportsPath></span>
<span class="nt"><sonar.scoverage.reportPath></span>target/scoverage.xml<span class="nt"></sonar.scoverage.reportPath></span>
<span class="nt"><sonar.cobertura.reportPath></span>target/cobertura-without-scala.xml<span class="nt"></sonar.cobertura.reportPath></span>
<span class="nt"><sonar.scala.cobertura.reportPath></span>/target/nonexisting.xml<span class="nt"></sonar.scala.cobertura.reportPath></span>
<span class="nt"><sonar.sources></span>src<span class="nt"></sonar.sources></span>
<span class="nt"><sonar.exclusions></span>src/test/**<span class="nt"></sonar.exclusions></span>
<span class="nt"><sonar.sourceEncoding></span>UTF-8<span class="nt"></sonar.sourceEncoding></span></code></pre></figure>
<h1 id="conclusion">Conclusion</h1>
<p>It was quite an effort to get this working, but in the end I am happy with the result. I hope others will also benefit from this.</p>
<p>Long story short: Cobertura gives the best coverage information for the Java code, Scoverage for the Scala code. To make sure no duplicate coverage information is sent to Sonar, I changed the sonar-scoverage-plugin to ignore the report for the <code class="language-plaintext highlighter-rouge">ScalaCoberturaSensor</code> and removed the coverage information for the Scala sources from Cobertura’s coverage XML using a Maven plugin. Using this method I can run the Sonar scan directly using Maven for any mixed Java/Scala project.</p>
<h1><a href="https://blog.timmybankers.nl/2014/12/22/Dockercon-Amsterdam">Dockercon Amsterdam</a></h1>
<p><em>2014-12-22</em></p>
<p><img src="https://blog.timmybankers.nl/assets/images/docker_wave_whale.svg" alt="Dockercon 2014" /></p>
<p>Dockercon had an Amsterdam edition this 4th and 5th December.
This post is to write down some notes and insights for this conference.</p>
<p><a href="https://www.docker.com">Docker</a> is a community open source project, with a company called Docker inc doing much of the work.</p>
<h1 id="opening-words">Opening words</h1>
<p>Although the conference is not that strict with timing, the content is interesting. The keynote starts off with an overview of the state and history of Docker: 700 contributors, 65000 forks, 67 million downloads.
Adoption is also growing, with big companies starting to use and implement it.</p>
<p>To accommodate the big community the whole flow of pull requests is formalised and even falls under an SLA.</p>
<p>The problem Docker is trying to solve is the separation of content creation and production. The idea is to let the engineers focus on content creation and to make the flow to production as painless as possible.</p>
<p>The big open question is “How to do Orchestration?”.</p>
<h1 id="ing-keynote">ING keynote</h1>
<p>At the moment I am employed by ING, so it is interesting to see us at more and more conferences. ING tries to set up an engineering culture and also carries this view out to the world. One way of doing that is speaking at conferences about the journey, and one of those conferences is DockerCon. Our Chief Architect Henk Kolk spoke about going from waterfall to agile, pointing out that a bank is an IT company. Especially the quote about throwing out all project managers and business analysts made an impact on the crowd. I had many conversations about our culture when people found out I work at ING. It made me realise my working environment is quite great.</p>
<h1 id="docker-keynotes">Docker Keynotes</h1>
<p>There were many presentations by employees of Docker Inc. Most of them were given by CTO Solomon Hykes, who is a very good speaker. Interesting is the way Docker Inc. works on their open source product: in only 22 months, the project has grown to more than 700 committers. This gives an interesting scaling problem.</p>
<p>Policy and process changes are done via pull requests. Instead of horribly long change documents, like governments produce, you actually use a diff to specify the changes. People can then vote on the pull request.</p>
<p>To scale the actual development, functionality is grouped and assigned to a group of committers with a tech lead. This group decides whether to merge a pull request if it concerns their code.
Also all code written by employees of Docker Inc. has to go through the same procedure, and is thus evaluated by the community as well.
I think this is a great system, which is scalable and community driven.</p>
<p>Another great aspect was the release of a couple of new <a href="http://blog.docker.com/2014/12/announcing-docker-machine-swarm-and-compose-for-orchestrating-distributed-apps/">Docker products</a>. Docker Machine, which is like <a href="https://boot2docker.io">Boot2Docker</a>, but also aimed at Docker hosts anywhere. Docker Swarm, a cluster manager for Docker. Docker Compose, inspired by Fig, for easily managing multi-container apps. Docker Hub Enterprise, basically Docker Hub on premise.
For each product one of the team members actually working on the product did the introduction and the demo.</p>
<h3 id="takeaways">Takeaways</h3>
<p>It seems the Docker ecosystem is maturing really fast. Good practices such as single-purpose applications, immutable servers and microservices fit nicely in the promise of Docker. Docker cluster management is really starting to take off, and hopefully will soon be ready to use in a large production environment.</p>
<p>What I really like is the ability to let teams have control. Docker enables them to pick any technology they are most productive in, as long as it can run in a Linux process.</p>
<p>And more non-functionally: Many hipsters, free T-shirts and good food and atmosphere.</p>
<h2 id="some-technology-to-keep-in-mind">Some technology to keep in mind</h2>
<ul>
<li>Docker Machine, Swarm, Compose and Dockerhub Enterprise</li>
<li>Flocker, Docker cluster manager with a networking solution over Docker hosts</li>
<li>Mesos, clustering for Docker, partnering up with Docker Inc.</li>
<li>Docker-builder, streamlining building and publishing of images</li>
</ul>
<h1><a href="https://blog.timmybankers.nl/programming/2014/07/15/Notes-on-the-Advanced-Akka-course">Notes on the Advanced Akka course</a></h1>
<p><em>2014-07-15</em></p>
<p>The Advanced Akka course is provided by Typesafe and is aimed at teaching advanced usages of Akka. The course covers the basics of Akka, Remoting, Clustering, Routers, CRDTs, Cluster Sharding and Akka Persistence. The following post starts with a general introduction to Akka and presents the takeaways from the course as we experienced them.</p>
<h2 id="a-general-overview-of-akka">A general overview of Akka</h2>
<p>The reader which is already familiar with Akka can skip this section.</p>
<p>According to the Akka site this is Akka:</p>
<blockquote>
<p>Akka is a toolkit and runtime for building highly
concurrent, distributed, and fault tolerant event-driven
applications on the JVM.</p>
</blockquote>
<p>Akka achieves this by using Actors.</p>
<blockquote>
<p>Actors are very lightweight concurrent entities.</p>
</blockquote>
<p>Each Actor has a corresponding mailbox stored separately from the Actor. The Actors together with their mailboxes reside in an ActorSystem. Additionally, the ActorSystem contains the Dispatcher which executes the handling of a message by an actor. Each Actor only handles a single message at a time.</p>
<p>In Akka everything is remote by design and philosophy. In practice this means that each Actor is identified by its <code class="language-plaintext highlighter-rouge">ActorRef</code>. This is a reference to the actor which provides <em>Location Transparency</em>.</p>
<p>Actors communicate with each other by sending messages to an another Actor through an <code class="language-plaintext highlighter-rouge">ActorRef</code>. This sending of the message takes virtually no time.</p>
<p>In addition to <code class="language-plaintext highlighter-rouge">ActorRef</code> there also exists an <code class="language-plaintext highlighter-rouge">ActorSelection</code>, which contains a path to one or more actors. Upon each sending of a message, the path is traversed until the actor is found, or not. However, no message is sent back when the actor is not found.</p>
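<p>A small example of the difference, assuming a local actor system:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import akka.actor._

object SelectionDemo extends App {
  class Greeter extends Actor {
    def receive = { case name: String => sender() ! s"Hello, $name" }
  }

  val system = ActorSystem("demo")
  // An ActorRef points to one specific actor incarnation.
  val ref: ActorRef = system.actorOf(Props[Greeter], "greeter")
  // An ActorSelection is a path, resolved again on every send.
  val sel: ActorSelection = system.actorSelection("/user/greeter")
  sel ! "Tim" // delivered if the path matches an actor; silently dropped otherwise
}
</code></pre></div></div>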
<p>States: Started - Stopped - Terminated
If an actor enters the <code class="language-plaintext highlighter-rouge">Stopped</code> state it first stops its child actors before entering the <code class="language-plaintext highlighter-rouge">Terminated</code> state.</p>
<h3 id="best-practices">Best-practices</h3>
<p>Import the <code class="language-plaintext highlighter-rouge">context.dispatcher</code> instead of the global Scala ExecutionContext. It is the ExecutionContext managed by Akka. Using the global context causes the Actors to be run in the global Thread pool.</p>
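<p>For example, a minimal sketch:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import akka.actor.Actor
import scala.concurrent.duration._

class Ticker extends Actor {
  import context.dispatcher // the Akka-managed ExecutionContext of this actor

  // scheduleOnce needs an implicit ExecutionContext; use the actor's own
  // dispatcher instead of scala.concurrent.ExecutionContext.Implicits.global.
  context.system.scheduler.scheduleOnce(1.second, self, "tick")

  def receive = { case "tick" => println("tick") }
}
</code></pre></div></div>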
<p>You should not use <code class="language-plaintext highlighter-rouge">PoisonPill</code> as it will be removed from future versions of Akka since it is not specific enough. Roll your own message to make sure the appropriate actions for graceful shutdown are done. Use <code class="language-plaintext highlighter-rouge">context.stop</code> to stop your actor.</p>
<p>Place your business logic in a separate trait and mix it in to the actor. This increases testability as you can easily unit test the trait containing the business logic. Also, you should put the creation of any child actors inside a separate method so the creation can be overridden from tests.</p>
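<p>A sketch of this pattern, with illustrative message and trait names:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>import akka.actor.Actor

// Hypothetical messages for the example.
case class Withdraw(amount: BigDecimal)
case object Accepted
case class Rejected(reason: String)

// The business logic lives in a plain trait, unit-testable without an ActorSystem.
trait AccountLogic {
  def withdraw(balance: BigDecimal, amount: BigDecimal): Either[String, BigDecimal] =
    if (balance - amount >= 0) Right(balance - amount)
    else Left("insufficient balance")
}

class AccountActor extends Actor with AccountLogic {
  private var balance: BigDecimal = 100

  def receive = {
    case Withdraw(amount) => withdraw(balance, amount) match {
      case Right(newBalance) => balance = newBalance; sender() ! Accepted
      case Left(reason)      => sender() ! Rejected(reason)
    }
  }
}
</code></pre></div></div>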
<h2 id="remoting">Remoting</h2>
<p>With the Remoting extension it is possible to communicate with other Actor Systems. This communication is often done through <code class="language-plaintext highlighter-rouge">ActorSelection</code>s instead of <code class="language-plaintext highlighter-rouge">ActorRef</code>.</p>
<p>Remoting uses Java serialisation by default which is slow and fragile in light of changing definitions. It is possible and recommended to use another mechanism such as Google Protobuf.</p>
<h2 id="clustering">Clustering</h2>
<p>Akka has a simple perspective on cluster management with regards to split-brain scenarios. Nodes become dead when they are observed as dead and they cannot resurrect. The only way a node can come up again is if it registers itself again.</p>
<p>When a net split happens the other nodes are marked as <em>unreachable</em>. When using a Singleton, this means that only the nodes that can reach the singleton will access it. The others will not decide on a new Singleton in order to prevent a split-brain scenario.</p>
<p>Another measure against split-brain is contacting the seed nodes in order: the first seed node is required to be up.</p>
<h2 id="fsm">FSM</h2>
<p>There is a library for writing finite state machines called FSM. For larger actors it can be useful to use FSM. Otherwise, stick to pure <code class="language-plaintext highlighter-rouge">become</code> and <code class="language-plaintext highlighter-rouge">unbecome</code>.</p>
<p>FSM also has an interval timer for scheduling messages. However, the use of <code class="language-plaintext highlighter-rouge">stay()</code> resets the interval timer, so you could have issues with never executing what is scheduled at the end of the timer.</p>
<h2 id="routers">Routers</h2>
<p>There are two different kinds of routers: Pools and Groups. Pools are in charge of their own children, which are created and killed by the pool. Groups are configured with an <code class="language-plaintext highlighter-rouge">ActorSelection</code> that defines the actors to which the group should send its messages. There are several implementations: Consistent Hash, Random, Round Robin, Broadcast, Scatter-Gather First, and Smallest Mailbox. The names are self-explanatory.</p>
<h2 id="synchronisation-of-data-with-crdts">Synchronisation of data with CRDTs</h2>
<p>Synchronising data between multiple nodes can be done by choosing your datatype such that, if the timestamps and events are generated in one place, no duplicate entries occur. Merging a map from a different node into your own map is then easily done by copying the entries you don’t already have into your own data.</p>
<p>This can be implemented by letting each member node broadcast which data-points they have. Each node can then detect which information is lacking and request the specific data from the node that claimed to have the data. At some future point in time all nodes will be in sync. This is called <em>eventual consistency</em>.</p>
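<p>A minimal sketch of such a merge, assuming entries are written once and never change:</p>
<div class="language-scala highlighter-rouge"><div class="highlight"><pre class="highlight"><code>object GrowOnlyMap {
  // Union-style merge: copy only the entries we lack. Because entries never
  // change, the merge is commutative, associative and idempotent, so all
  // nodes converge on the same map regardless of message order.
  def merge[K, V](local: Map[K, V], remote: Map[K, V]): Map[K, V] =
    remote.foldLeft(local) { case (acc, (k, v)) =>
      if (acc.contains(k)) acc else acc + (k -> v)
    }
}
</code></pre></div></div>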
<h2 id="singleton">Singleton</h2>
<p>If you have a cluster singleton manager proxy, it only starts forwarding once the cluster is formed, which happens when a member connects. The proxy will then pass on the messages it buffered in the meantime.</p>
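<p>A sketch using the (later) classic cluster-singleton API; <code class="language-plaintext highlighter-rouge">Consumer</code> and <code class="language-plaintext highlighter-rouge">End</code> are invented names:</p>
<pre><code class="language-scala">import akka.actor.{Actor, ActorSystem, Props}
import akka.cluster.singleton.{ClusterSingletonManager, ClusterSingletonManagerSettings, ClusterSingletonProxy, ClusterSingletonProxySettings}

case object End // our own graceful-shutdown message, as advised above

class Consumer extends Actor {
  def receive = {
    case End => context.stop(self)
    case msg => () // handle work
  }
}

object SingletonExample extends App {
  val system = ActorSystem("cluster")

  // Exactly one Consumer in the whole cluster, running on the oldest node.
  system.actorOf(
    ClusterSingletonManager.props(
      singletonProps = Props[Consumer],
      terminationMessage = End,
      settings = ClusterSingletonManagerSettings(system)),
    name = "consumer")

  // The proxy buffers messages until the cluster is formed and the
  // singleton is located, then forwards them.
  val proxy = system.actorOf(
    ClusterSingletonProxy.props(
      singletonManagerPath = "/user/consumer",
      settings = ClusterSingletonProxySettings(system)),
    name = "consumerProxy")
}
</code></pre>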
<h2 id="cluster-sharding">Cluster Sharding</h2>
<p>Sharding is a way to split up a group of actors in a cluster. This can be useful if the group is too large to fit in the memory of a single machine. The Cluster Sharding feature takes care of partitioning the actors, using a hash that you define in a function called <code class="language-plaintext highlighter-rouge">shardResolver</code>. The sharded actors can be messaged by a unique identifier using <code class="language-plaintext highlighter-rouge">ClusterSharding(system).shardRegion("Counter")</code>, which proxies the message to the correct actor.
<code class="language-plaintext highlighter-rouge">ClusterSharding.start</code> plays the same role for shards as the Manager does for Singletons.</p>
<p>It is recommended to put the sharding functions into a singleton object for easy re-use of your shards, containing the functions to start the sharding extension, to proxy to the shard region, and so on. It is also convenient to add <code class="language-plaintext highlighter-rouge">tell</code> and <code class="language-plaintext highlighter-rouge">initialise</code> helper functions to respectively send a message to, and initialise, an actor by its unique id.</p>
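<p>A sketch of such a singleton object, using the later classic-sharding API in which the hash function referred to above as <code class="language-plaintext highlighter-rouge">shardResolver</code> is called <code class="language-plaintext highlighter-rouge">extractShardId</code>; all names are invented:</p>
<pre><code class="language-scala">import akka.actor.{Actor, ActorRef, ActorSystem, Props}
import akka.cluster.sharding.{ClusterSharding, ClusterShardingSettings, ShardRegion}

case class Envelope(counterId: Long, payload: Any)

class Counter extends Actor {
  def receive = { case msg => () /* update this counter's state */ }
}

object CounterSharding {
  val extractEntityId: ShardRegion.ExtractEntityId = {
    case Envelope(id, payload) => (id.toString, payload)
  }

  val extractShardId: ShardRegion.ExtractShardId = {
    case Envelope(id, _) => (id % 100).toString // hash into 100 shards
  }

  def start(system: ActorSystem): ActorRef =
    ClusterSharding(system).start(
      typeName = "Counter",
      entityProps = Props[Counter],
      settings = ClusterShardingSettings(system),
      extractEntityId = extractEntityId,
      extractShardId = extractShardId)

  // The `tell` helper mentioned above: message an entity by its unique id.
  def tell(system: ActorSystem, id: Long, msg: Any): Unit =
    ClusterSharding(system).shardRegion("Counter") ! Envelope(id, msg)
}
</code></pre>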
<h2 id="akka-persistence">Akka Persistence</h2>
<p>Akka Persistence uses a journal to store which messages were processed. One of the supported storage mechanisms is Cassandra. It is also possible to use a file-based journal, which, of course, is not recommended.</p>
<p>In the current version of Akka there are two approaches to persistence: command sourcing and event sourcing. Simply put, in command sourcing each incoming message is persisted before it is offered to the actor to do with as it pleases, whereas in event sourcing only the results of actions are persisted. The latter is preferred and will be the only remaining method in future versions.</p>
<p>Both methods support storing a snapshot of the current state and recovering from it.</p>
<h3 id="command-sourcing">Command Sourcing</h3>
<p>The main problem with command sourcing is that <em>all</em> messages are replayed. This includes requests for information from actors that have since died, which wastes resources for nothing. Moreover, in case of errors, the last message, the very one that killed the actor, is also replayed, probably killing the actor again in the process.</p>
<h3 id="event-sourcing">Event Sourcing</h3>
<p>With event sourcing one only stores state-changing events. During recovery, events are received by the <code class="language-plaintext highlighter-rouge">receiveRecover</code> method. <em>External</em> side-effects should be performed in the <code class="language-plaintext highlighter-rouge">receive</code> method. The code for the internal side-effect of an event should be the same in both the <code class="language-plaintext highlighter-rouge">receive</code> and <code class="language-plaintext highlighter-rouge">receiveRecover</code> methods. The actor or trait for this will be named <code class="language-plaintext highlighter-rouge">PersistentActor</code>.</p>
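<p>A minimal event-sourcing sketch with invented names; note that in the released API the command handler is called <code class="language-plaintext highlighter-rouge">receiveCommand</code> rather than plain <code class="language-plaintext highlighter-rouge">receive</code>:</p>
<pre><code class="language-scala">import akka.persistence.PersistentActor

case class Add(amount: Int)   // command
case class Added(amount: Int) // event: the persisted result of the command

class Balance extends PersistentActor {
  override def persistenceId = "balance-1"

  private var state = 0
  private def update(evt: Added): Unit = state += evt.amount

  override def receiveCommand = {
    case Add(amount) =>
      persist(Added(amount)) { evt =>
        update(evt)      // internal effect: same code as during recovery
        sender() ! state // external side-effect: only on the live path
      }
  }

  override def receiveRecover = {
    case evt: Added => update(evt) // replayed from the journal, no external effects
  }
}
</code></pre>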
<h3 id="actor-offloading">Actor offloading</h3>
<p>One can use Akka Persistence to “pause” long living actors, e.g. actors that have seen no activity lately. This frees up memory. When the actor is needed again it can be safely restored from the persistence layer.</p>
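<p>One way this could look, sketched as a persistent actor that stops itself after a period of inactivity; the names and timeout are invented:</p>
<pre><code class="language-scala">import scala.concurrent.duration._
import akka.actor.ReceiveTimeout
import akka.persistence.PersistentActor

class Account extends PersistentActor {
  override def persistenceId = "account-42"

  context.setReceiveTimeout(10.minutes)

  override def receiveCommand = {
    case ReceiveTimeout => context.stop(self) // "pause": free the memory
    case cmd            => () // handle commands, persist events, ...
  }

  override def receiveRecover = {
    case evt => () // rebuild in-memory state when the actor is needed again
  }
}
</code></pre>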
<h2 id="tidbits">Tidbits</h2>
<p>Akka 3 is to be released “not super soon”. It will contain typed actors. The consequence of this is that the sender field will be removed from the actor. Therefore, for request-response, the <code class="language-plaintext highlighter-rouge">ActorRef</code> should be added to the request itself.</p>
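<p>A sketch of request-response with the <code class="language-plaintext highlighter-rouge">ActorRef</code> carried inside the request itself (invented names):</p>
<pre><code class="language-scala">import akka.actor.{Actor, ActorRef}

case class GetBalance(replyTo: ActorRef) // the request says where to reply
case class BalanceIs(amount: Int)

class Account extends Actor {
  private var balance = 0

  def receive = {
    case GetBalance(replyTo) => replyTo ! BalanceIs(balance) // no sender() needed
  }
}
</code></pre>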
<h2 id="concluding">Concluding</h2>
<p>The Advanced Akka course gives a lot of insight and concrete examples of how to use the advanced Akka features of clustering, sharding and persisting data across multiple nodes, in order to create a system that really is highly available, resilient and scalable. It also touches on bleeding-edge functionality, the ideas and concepts around it, and what to expect next in this growing ecosystem.</p>
ScalaCheck in ScalaTest2014-06-05T00:00:00+00:00https://blog.timmybankers.nl/2014/06/05/ScalaCheck-in-ScalaTest
<p>Today I held a presentation at the Scala Community at my employer about ScalaCheck.
ScalaCheck is a property-based testing tool, which allows you to specify properties using predicates such as \( \forall s : s.reverse.reverse \equiv s \), which denotes that for every String <code class="language-plaintext highlighter-rouge">s</code>, reversing <code class="language-plaintext highlighter-rouge">s</code> twice yields the original <code class="language-plaintext highlighter-rouge">s</code>.</p>
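<p>In ScalaCheck code the reverse property looks roughly like this:</p>
<pre><code class="language-scala">import org.scalacheck.Prop.forAll
import org.scalacheck.Properties

object StringSpec extends Properties("String") {
  // ScalaCheck generates many random Strings and checks the predicate for each.
  property("reverse twice is identity") = forAll { (s: String) =>
    s.reverse.reverse == s
  }
}
</code></pre>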
<p>Please see the <a href="/PropertyBasedTestingScalaCheck/index.html">Slides</a>, which are created using the nice <a href="https://github.com/hakimel/reveal.js/">RevealJS</a>.</p>
<p>Maybe even more interesting are the code examples which can be found in the <a href="https://github.com/TimSoethout/PropertyBasedTestingScalaCheck/tree/master/code">code folder</a> of <a href="https://github.com/TimSoethout/PropertyBasedTestingScalaCheck">my github repo for the presentation</a>.
There are a couple of files with accompanying tests. <code class="language-plaintext highlighter-rouge">PropertiesTest.scala</code> shows the ScalaCheck way of writing an executable test file which checks properties.
<code class="language-plaintext highlighter-rouge">ReverseExampleTest.scala</code> contains some simple properties using ScalaTest’s <code class="language-plaintext highlighter-rouge">GeneratorDrivenPropertyChecks</code>, which uses ScalaCheck under the hood.
<code class="language-plaintext highlighter-rouge">IbanExampleTest.scala</code> contains a more interesting example, where an implementation that calculates IBANs from old bank account numbers is tested.</p>
Send To Kobo2014-04-19T00:00:00+00:00https://blog.timmybankers.nl/2014/04/19/Send-to-kobo
<p>A while ago I bought my first e-reader. The <a href="https://www.kobo.com/koboaurahd">Kobo Aura HD</a>. It is a very nice device with a clear screen and it turned out to run some kind of linux.</p>
<p>You can copy a file with name <code class="language-plaintext highlighter-rouge">KoboRoot.tgz</code> to the <code class="language-plaintext highlighter-rouge">.kobo</code> directory when mounted and as soon as you unmount and disconnect the device, it will copy the contents of the file into the root file system of the ereader. Thus there is a way to make changes to your device!</p>
<p>Some time ago I even managed to install an ssh server on the ereader. See <a href="https://wikisec.free.fr/mobile/kobo.html#ssh">https://wikisec.free.fr/mobile/kobo.html#ssh</a>.</p>
<p>One of the things I miss on my Kobo is an easy way to send files to the device. You must either connect the reader and copy the file over, or run a Calibre server and browse to it in the web browser on the device. This is often far too much work, especially if you want to send smaller texts such as blog posts and news articles to your device.</p>
<p>More recently I stumbled upon <a href="https://sendtokobo.com/">a website</a> that uses the method described above to let the device receive documents via an email address. I signed up and am trying it out now. This could be the ideal functionality to make my ereader more useful!</p>
Poor man's VPN2014-03-24T00:00:00+00:00https://blog.timmybankers.nl/2014/03/24/Poor-man's-VPN
<p>Some time ago I was wondering about security on open WiFi networks. These networks provide no security on the transport layer, and anyone who wants to sniff your traffic can do so freely over the air. Fire up Wireshark when you are on a public hotspot and you can see for yourself. Something like <a href="https://www.eff.org/https-everywhere">HTTPS Everywhere</a> can make it a little bit safer, but even when you are browsing over HTTPS only the content of the requests is protected: the target domains can still be read. Also, not every site is available over HTTPS.</p>
<p>A VPN is a way to route all your traffic to the VPN endpoint, so that nobody nearby can sniff your data. There are of course all kinds of expensive tools for this, but another way is to just use plain old SSH tunnelling.</p>
<p>A tool called <a href="https://github.com/apenwarr/sshuttle">Sshuttle</a> uses this technique to forward all your traffic in a transparent way to any machine on which you have SSH access. It only requires local root privileges; root is not needed on the “VPN server”.</p>
<p>You can install Sshuttle with <code class="language-plaintext highlighter-rouge">brew install sshuttle</code> and you can connect your vpn with <code class="language-plaintext highlighter-rouge">sshuttle --dns --pidfile=/tmp/sshuttle.pid --remote=remote.ssh.machine 0/0</code>. For extra safety also the DNS server on the other side is used.</p>
<p>This can for example connect you to a machine you have at home or at your office. Effectively, all your traffic is encrypted via the SSH connection before it leaves your computer and is only sent onto the internet by the VPN machine, meaning no sniffer on the open WiFi can see what you are sending over the air; only the encrypted packets to the VPN machine are visible.</p>
<p>One drawback of this approach is of course some speed loss by depending on multiple connections. Sshuttle does a neat job preventing tcp-over-tcp.</p>
<h2 id="aws">AWS</h2>
<p>If you don’t have an SSH server available, you can also use a <a href="https://aws.amazon.com/free/">free instance of Amazon AWS</a>. The only downside is the required credit card. I registered and had a simple free Linux instance running in minutes.</p>
<p>I used the private key generated by AWS and added the host to my <code class="language-plaintext highlighter-rouge">~/.ssh/config</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>Host awsproxy
HostName AWSIP
User ec2-user
IdentityFile ~/.ssh/AWS.pem
</code></pre></div></div>
<p>Now I can connect sshuttle to AWS with: <code class="language-plaintext highlighter-rouge">sshuttle --dns --daemon --pidfile=/tmp/sshuttle.pid --remote=awsproxy 0/0</code>.
The speed is not ideal, but at least it is safe (as long as you trust Amazon…) and as an extra I can also watch Hulu from outside the US.</p>
<p>For convenience I also created some Fish shell functions to create and destroy the tunnel/vpn:</p>
<pre><code class="language-fish">function tunnelaws
sshuttle --dns --daemon --pidfile=/tmp/sshuttle.pid --remote=awsproxy 0/0
end
function tunnelx
if test -f /tmp/sshuttle.pid
kill (cat /tmp/sshuttle.pid)
echo "Disconnected."
end
end
</code></pre>
Wifi channels and OSX2014-03-22T00:00:00+00:00https://blog.timmybankers.nl/2014/03/22/Wifi-channels-and-osx
<p>I found out that OS X did not see every available wifi channel. See <code class="language-plaintext highlighter-rouge">System Information</code> > <code class="language-plaintext highlighter-rouge">System Report</code>, then <code class="language-plaintext highlighter-rouge">Network</code> > <code class="language-plaintext highlighter-rouge">Wifi</code>:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>en0:
Card Type: AirPort Extreme (0x14E4, 0xD1)
[..]
Country Code: US
Supported PHY Modes: 802.11 a/b/g/n
Supported Channels: 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 36, 40, 44, 48, 52, 56, 60, 64, 100, 104, 108, 112, 116, 120, 124, 128, 132, 136, 140
[..]
</code></pre></div></div>
<p>Due to regulations the allowed wifi channels in the world differ in location. See <a href="https://en.wikipedia.org/wiki/List_of_WLAN_channels#Interference_Concerns">wikipedia</a>.</p>
<p>My adapter has configured itself to <code class="language-plaintext highlighter-rouge">Country Code: US</code> and <code class="language-plaintext highlighter-rouge">US</code> does not allow wifi (2.4Ghz) channels 12 and 13, while <code class="language-plaintext highlighter-rouge">NL</code> for example does.</p>
<p>The country code is not hardcoded; it turns out the wifi adapter picks up its country code from the first wifi network it discovers. So turning your wifi off and on again can make more channels available.
Some blogs say that changing the locale and location settings and rebooting fixes this issue, but it is really the reboot triggering a wifi adapter reset, combined with some luck, that makes this work.</p>
<p>The sad news is that there seems to be no way to control this behaviour.</p>
sudo -E2014-03-18T00:00:00+00:00https://blog.timmybankers.nl/2014/03/18/Sudo--E
<p>If you ever find yourself in a situation where you need to run a superuser command in your carefully crafted terminal environment, use:</p>
<p><code class="language-plaintext highlighter-rouge">sudo -E</code></p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>NAME
sudo, sudoedit — execute a command as another user
[..]
-E, --preserve-env
Indicates to the security policy that the user wishes to preserve their existing environment
variables. The security policy may return an error if the user does not have
permission to preserve the environment.
</code></pre></div></div>
<p>My use case was running a build script that fetched all its dependencies from the internet. My proxy settings were configured for my own user, but not for the root user. This command saved me an enormous amount of headaches.</p>
Appscale2014-03-18T00:00:00+00:00https://blog.timmybankers.nl/2014/03/18/Appscale
<h1 id="getting-started-with-appengine">Getting started with AppEngine</h1>
<p>I used this example project.
<code class="language-plaintext highlighter-rouge">git clone git@github.com:GoogleCloudPlatform/appengine-angular-guestbook-java.git</code></p>
<p>The basic documentation on which this is based can be found on <a href="https://github.com/GoogleCloudPlatform/appengine-angular-guestbook-java">https://github.com/GoogleCloudPlatform/appengine-angular-guestbook-java</a>.</p>
<h2 id="local-development">Local Development</h2>
<ul>
<li>Make sure you have at least
<ul>
<li>Maven version 3.1.0: <code class="language-plaintext highlighter-rouge">mvn -version</code></li>
<li>Java 7: <code class="language-plaintext highlighter-rouge">javac -version</code></li>
</ul>
</li>
</ul>
<p>If you are in the app directory, simply run: <code class="language-plaintext highlighter-rouge">mvn appengine:devserver</code> and you can reach the sample app on <code class="language-plaintext highlighter-rouge">http://localhost:8080</code>.</p>
<h2 id="deploying-on-appengine">Deploying on AppEngine</h2>
<h3 id="setting-up-appengine">Setting up AppEngine</h3>
<ul>
<li>Login to <a href="https://appengine.google.com/">Google App Engine</a></li>
<li>
<p>Create an account with a specific application name.</p>
</li>
<li>
<p>In the file <code class="language-plaintext highlighter-rouge">src/main/webapp/WEB-INF/appengine-web.xml</code> change the <code class="language-plaintext highlighter-rouge"><application></code> value from <code class="language-plaintext highlighter-rouge">gae-angular-guestbook</code> to your chosen application name.</p>
</li>
<li>
<p>Run <code class="language-plaintext highlighter-rouge">mvn appengine:update</code></p>
<p>Behind a proxy you need to pass the proxy parameters explicitly, since they do not seem to be inherited:</p>
<p><code class="language-plaintext highlighter-rouge">mvn -DproxySet=true -Dhttp.proxyHost=localhost -Dhttp.proxyPort=3128 appengine:update</code></p>
<p>See my other <a href="/2014/03/17/Using-Cntlm-to-bypass-your-corporate-proxy">post to setup a proxy forwarder</a>.</p>
</li>
<li>Now see and behold the nice app at <code class="language-plaintext highlighter-rouge">http://applicationName.appspot.com/</code>. Note that it may take some time to fire up the datastore.</li>
</ul>
<h2 id="deploying-on-appscale">Deploying on AppScale</h2>
<ul>
<li>Make sure you have
<ul>
<li>A working Appscale instance. You can use the instructions on <code class="language-plaintext highlighter-rouge">https://github.com/AppScale/appscale/wiki/AppScale-on-VirtualBox</code> to set it up. Maybe I’ll make another post about this.</li>
<li><a href="https://github.com/AppScale/appscale-tools">appscale-tools</a> installed. See my <a href="https://github.com/AppScale/appscale-tools/pull/384">pull request</a> (edit: merged by now) on how to get it installed on debian-unstable or use my <a href="https://github.com/TimSoethout/appscale-tools">fork</a>.</li>
</ul>
</li>
</ul>
<p>When trying to run <code class="language-plaintext highlighter-rouge">appscale deploy appengine-angular-guestbook-java-directory</code> in the folder where your <code class="language-plaintext highlighter-rouge">Appscalefile</code> is, you will get an error message: <code class="language-plaintext highlighter-rouge">Couldn't find an app.yaml or appengine-web.xml file in appengine-angular-guestbook-java/..</code></p>
<p>From <a href="https://groups.google.com/d/msg/appscale_community/--YyKd6xwts/NaoE1VDSu5cJ">one of the community questions</a> it turns out that AppScale needs the application to be in a directory named <code class="language-plaintext highlighter-rouge">war</code>. Old documentation also states the following:</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>For Java apps, make sure you point "appscale deploy" at the directory that contains your "war" directory (not the "war" directory itself). If you don't, you'll get the same error message as above.
</code></pre></div></div>
<p>We can do this:</p>
<ul>
<li>Make sure the example project is built and packaged: <code class="language-plaintext highlighter-rouge">mvn clean package</code></li>
<li>Rename (or copy) <code class="language-plaintext highlighter-rouge">target/appengine-angular-guestbook-java-1.0-SNAPSHOT/</code> to <code class="language-plaintext highlighter-rouge">target/war</code></li>
<li>Deploy the application (from directory with <code class="language-plaintext highlighter-rouge">Appscalefile</code>): <code class="language-plaintext highlighter-rouge">appscale deploy appengine-angular-guestbook-java/target</code></li>
<li>Visit your AppScale hosted site on the url given in the output.
<ul>
<li>Don’t forget to add to IP(-range) of the AppScale VM to your no-proxy list to access it directly in case of a proxy.</li>
</ul>
</li>
<li>. . .</li>
<li>Profit</li>
</ul>
<h2 id="more-to-do">More to do!</h2>
<ul>
<li>Add deployment on OpenStack</li>
<li>Add deployment on Docker</li>
</ul>
Using Cntlm To Bypass Your Corporate Proxy2014-03-17T00:00:00+00:00https://blog.timmybankers.nl/2014/03/17/Using-Cntlm-to-bypass-your-corporate-proxy
<p><a href="https://cntlm.sourceforge.net/">Cntlm</a> can be used as a forwarding proxy for an enterprise NTLM proxy on your development machine. This way programs that do not support NTLM can use Cntlm to access the outside world. It also removes the need to store your proxy credentials in your bash scripts, since only the NTLM-token is stored in your Cntlm config.</p>
<p>Each user can use their own configuration file (with hashed credentials) and a custom port, or use the default <code class="language-plaintext highlighter-rouge">/etc/cntlm.conf</code> for the startup daemon.</p>
<p>Install Cntlm using your package manager:</p>
<p><code class="language-plaintext highlighter-rouge">sudo apt-get install cntlm</code></p>
<p>Place cntlm.conf somewhere in your home directory and fill in the Username, PassNTLMv2 and Listen port.</p>
<div class="language-plaintext highlighter-rouge"><div class="highlight"><pre class="highlight"><code>#
# Cntlm Authentication Proxy Configuration
#
# NOTE: all values are parsed literally, do NOT escape spaces,
# do not quote. Use 0600 perms if you use plaintext password.
#
# Username: Your proxy user
Username USERNAME
Domain DOMAIN
# Generate Auth and PassNTLMv2 with `/usr/sbin/cntlm -v -c cntlm.conf -M "https://www.google.com"` (asks for password once for Username)
Auth NTLMv2
PassNTLMv2 <hash>
# List of parent proxies to use. More proxies can be defined
# one per line in format <proxy_ip>:<proxy_port>
#
Proxy my.corporate.proxy:8080
Proxy my.backup.corporate.proxy:8080
# List addresses you do not want to pass to parent proxies
# * and ? wildcards can be used
#
NoProxy localhost, 127.0.0.*, *.local
# Specify the port cntlm will listen on
# You can bind cntlm to specific interface by specifying
# the appropriate IP address also in format <local_ip>:<local_port>
# Cntlm listens on 127.0.0.1:3128 by default
#
# Choose your custom port here
Listen 3128
</code></pre></div></div>
<p>There is no need to put your password in the config file, since you can generate an authentication token using: <code class="language-plaintext highlighter-rouge">/usr/sbin/cntlm -v -c cntlm.conf -M "https://www.google.com"</code>.
The <code class="language-plaintext highlighter-rouge">-M</code> flag generates the token for you and then you can copy the resulting Auth and PassNTLMv2 into your config. You do need to set the parent proxy beforehand.</p>
<p>When you want to use the proxy, run: <code class="language-plaintext highlighter-rouge">/usr/sbin/cntlm -v -c cntlm.conf</code> or when using the startup daemon, activate your settings by reloading them with <code class="language-plaintext highlighter-rouge">sudo service cntlm restart</code>.</p>
<p>Your proxy is now available on <code class="language-plaintext highlighter-rouge">http://localhost:3128</code> (only accessible from localhost), and everything that needs to connect to the internet can use it without you having to supply the cumbersome credentials.</p>
<p>Common shell variables to let (command line) programs such as mvn, git, etc. use the proxy can be set in <code class="language-plaintext highlighter-rouge">.bashrc</code>, for example (with <code class="language-plaintext highlighter-rouge">$proxy</code> and <code class="language-plaintext highlighter-rouge">$proxyport</code> pointing at the local Cntlm instance, e.g. <code class="language-plaintext highlighter-rouge">localhost</code> and <code class="language-plaintext highlighter-rouge">3128</code>):</p>
<div class="language-bash highlighter-rouge"><div class="highlight"><pre class="highlight"><code><span class="nb">export </span><span class="nv">http_proxy</span><span class="o">=</span><span class="s2">"http://</span><span class="nv">$proxy</span><span class="s2">:</span><span class="nv">$proxyport</span><span class="s2">"</span>
<span class="nb">export </span><span class="nv">https_proxy</span><span class="o">=</span><span class="nv">$http_proxy</span>
<span class="nb">export </span><span class="nv">ftp_proxy</span><span class="o">=</span><span class="nv">$http_proxy</span>
<span class="nb">export </span><span class="nv">rsync_proxy</span><span class="o">=</span><span class="nv">$http_proxy</span>
<span class="nb">export </span><span class="nv">no_proxy</span><span class="o">=</span><span class="s2">"localhost,127.0.0.1,*.local,*.intranet,169.254/16"</span>
<span class="nb">export </span><span class="nv">ANT_OPTS</span><span class="o">=</span><span class="s2">"-Dhttp.proxyHost </span><span class="nv">$proxy</span><span class="s2"> -Dhttp.proxyPort </span><span class="nv">$proxyport</span><span class="s2">"</span>
<span class="nb">export </span><span class="nv">npm_config_proxy</span><span class="o">=</span><span class="nv">$http_proxy</span>
<span class="nb">export </span><span class="nv">npm_config_https_proxy</span><span class="o">=</span><span class="nv">$https_proxy</span>
<span class="nb">export </span><span class="nv">JAVA_OPTS</span><span class="o">=</span><span class="nv">$ANT_OPTS</span>
<span class="nb">export </span><span class="nv">SBT_OPTS</span><span class="o">=</span><span class="nv">$JAVA_OPTS</span>
git config <span class="nt">--global</span> http.proxy http://<span class="nv">$proxy</span>:<span class="nv">$proxyport</span>
</code></pre></div></div>
Setting Up A New Blog Using Github Pages2014-03-10T00:00:00+00:00https://blog.timmybankers.nl/2014/03/10/Setting-up-a-new-blog-using-Github-Pages
<h2 id="a-new-blog">A new blog</h2>
<p>A long time ago I started out with a self-hosted Wordpress blog. It took some time to set up, but in the end I got it working on my server at home, available together with some other sites through Apache.</p>
<p>It took a lot of time to set up Wordpress with all the features I wanted: some kind of post drafting and publication, a simple and easy web site, and good uptime.
Unfortunately the blog died a cold death a while ago. When setting up a new blog I wanted more or less the same functionality, and was about to set up a new self-hosted Wordpress blog when I first looked into alternatives.</p>
<p>My eye fell upon <a href="https://pages.github.com/">GitHub Pages</a>, a <strong>free</strong> hosted service from GitHub on which you can serve simple static web pages. As long as your pages are statically generated, it is a great place to host them with good availability. Luckily GitHub also provides a way to generate the pages using <a href="https://jekyllrb.com/">Jekyll</a>, a blog-aware static page generator.
The blog itself is just a GitHub repository and the blog posts are files written in <a href="https://daringfireball.net/projects/markdown/">markdown</a>.</p>
<h2 id="getting-started">Getting started</h2>
<p>The process to get it up and running was easy:</p>
<ul>
<li>
<p>First create a repository on GitHub with your GitHub user: <em>user</em>.github.io (.com might also work).</p>
</li>
<li>
<p>Check out the Jekyll template of your choice (I chose <a href="https://github.com/plusjade/jekyll-bootstrap/">Jekyll Bootstrap</a>)</p>
<p><code class="language-plaintext highlighter-rouge">$ git clone https://github.com/plusjade/jekyll-bootstrap/</code></p>
</li>
<li>
<p>Attach it to your newly created repo</p>
<p><code class="language-plaintext highlighter-rouge">$ git remote set-url origin git@github.com:user/user.github.io</code></p>
</li>
<li>
<p>And push</p>
<p><code class="language-plaintext highlighter-rouge">$ git push</code></p>
</li>
</ul>
<p>Your new blog is up and running in about 10 minutes.</p>
<p>Optionally you can also let your own domain refer to it very easily:</p>
<ul>
<li>
<p>Add a file <code class="language-plaintext highlighter-rouge">CNAME</code> to the root of your repository with the exact URL in the body of the file such as: <code class="language-plaintext highlighter-rouge">blog.timmybankers.nl</code></p>
</li>
<li>
<p>Configure the DNS of your domain to let the same URL point to <em>user</em>.github.io using, as you might have guessed, a <code class="language-plaintext highlighter-rouge">CNAME</code> record.</p>
</li>
</ul>
<h2 id="useful-links">Useful links</h2>
<p>More useful links are:</p>
<ul>
<li><a href="https://prose.io/">https://prose.io/</a>: A simple web interface that you can use to edit your markdown blog posts in the browser.</li>
<li>themes: You can easily change the visual appearance of your blog by changing <a href="https://themes.jekyllbootstrap.com/">themes</a>.</li>
</ul>