λk.(k blog): Enabling CORS for nginx WebDAV and CalDAV reverse-proxy (William J. Bowman, 2021-05-13)
<p>The past few weeks I’ve been learning to develop and deploy a Progress Web App (PWA) that can communicate with my WebDAV and CalDAV servers.
Unfortunately, while these are on the same domain, they are on different sub-domains, and this causes the requests to be considered cross-origin requests.
For security reasons, cross-origin requests are blocked by most browsers by default unless the server explicitly allows cross-origin resource sharing (<a name="(tech._cor)"></a><span style="font-style: italic">CORS</span>).
This is pretty easy to set up for static resources or scripts, if they use default headers and GET and POST methods.
However, it’s particularly complicated for WebDAV, CalDAV, and other protocols that use additional headers or methods.</p>
<!--more-->
<p></p>
<div class="SIntrapara">
<h1 class="fake-header">Table of Contents</h1>
</div>
<div class="SIntrapara">
<table cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._.T.L.D.R%29">1<span class="hspace"> </span>TLDR</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._.C.O.R.S_.Requests_and_.Responses%29">2<span class="hspace"> </span>CORS Requests and Responses</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Preflight%29">2.1<span class="hspace"> </span>Preflight</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Preflight_.Request%29">2.1.1<span class="hspace"> </span>Preflight Request</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Preflight_.Response%29">2.1.2<span class="hspace"> </span>Preflight Response</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Cross-.Origin_.Requests%29">2.2<span class="hspace"> </span>Cross-Origin Requests</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._.Configuring_nginx%29">3<span class="hspace"> </span>Configuring nginx</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Configure_.Valid_.Cross-.Origin_.Hosts%29">3.1<span class="hspace"> </span>Configure Valid Cross-Origin Hosts</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Configure_.C.O.R.S_.Headers%29">3.2<span class="hspace"> </span>Configure CORS Headers</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Process_.C.O.R.S_.Requests%29">3.3<span class="hspace"> </span>Process CORS Requests</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._.Conclusion_and_.Debugging%29">4<span class="hspace"> </span>Conclusion and Debugging</a></p></td></tr></tbody></table></div>
<h1>1
<tt> </tt><a name="(part._.T.L.D.R)"></a>TLDR</h1>
<p>Copy/paste/modify the below snippets into your <span class="stt">nginx.conf</span> in the correct places.
You’ll need to add the <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span> declarations to <span class="default"><code class="highlight-inline"><span class="k">http</span></code></span> context, and merge the two <span class="default"><code class="highlight-inline"><span class="k">server</span></code></span> declarations into your WebDAV and CalDAV server configuration blocks.
You’ll also need to customize the safelist that sets <span class="default"><code class="highlight-inline"><span class="k">$cors_origin_header</span></code></span>, and possibly the <span class="default"><code class="highlight-inline"><span class="k">$cors_expose_headers</span></code></span> and <span class="default"><code class="highlight-inline"><span class="k">$cors_allow_headers</span></code></span> variables.</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">cors-nginx.conf</a></div>
<div class="SIntrapara">
<div class="brush: nginx">
<pre><code>http {
    # .. in http context ..
    # Declare the safe cross-origin hosts
    map $http_origin $cors_origin_header {
        hostnames;
        default "https://example.com";
        "https://example.com" "$http_origin";
        "https://www.example.com" "$http_origin";
    }
    # Declare CORS exposed response headers
    map $host $std_response_headers {
        default "Content-Type, Content-Range, Content-Language, Date, Content-Length, Content-Encoding";
    }
    map $host $cache_control_response_headers {
        default "Etag, Last-Modified";
    }
    map $host $dav_response_headers {
        default "Dav";
    }
    map $host $cors_expose_headers {
        default "${dav_response_headers}, ${std_response_headers}, ${cache_control_response_headers}";
    }
    # Declare CORS allowed request headers
    map $host $std_request_headers {
        default "Authorization, Origin, X-Requested-With, Range, Accept-Encoding, Content-Length, Content-Type";
    }
    map $host $dav_request_headers {
        default "If-Match, If-None-Match, If-Modified-Since, Depth";
    }
    map $host $cors_allow_headers {
        default "${dav_request_headers}, ${std_request_headers}";
    }
    # Detect a preflight request
    map $http_access_control_request_headers $preflight_h {
        default "true";
        "" "false";
    }
    map $http_access_control_request_method $preflight_m {
        default "true";
        "" "false";
    }
    map $request_method $preflight {
        default "false";
        "OPTIONS" "${preflight_h}${preflight_m}true";
    }
    # Configure WebDAV
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name webdav.example.com;
        location /.well-known/ {
            root /srv/http/www;
        }
        # Advertise CORS access controls.
        add_header "Access-Control-Allow-Origin" "$cors_origin_header" always;
        add_header "Access-Control-Allow-Credentials" "true" always;
        add_header "Access-Control-Expose-Headers" "$cors_expose_headers" always;
        location / {
            # Handle preflight request
            if ($preflight = "truetruetrue"){
                add_header "Access-Control-Allow-Origin" "$cors_origin_header";
                add_header "Access-Control-Allow-Headers" "$cors_allow_headers";
                add_header "Access-Control-Allow-Methods" "PROPFIND, COPY, MOVE, MKCOL, CONNECT, DELETE, DONE, GET, HEAD, OPTIONS, PATCH, POST, PUT";
                add_header "Access-Control-Max-Age" 1728000;
                add_header "Content-Type" "text/plain; charset=UTF-8";
                add_header "Content-Length" 0;
                return 204;
            }
            auth_basic "Not currently available";
            auth_basic_user_file /etc/nginx/htpasswd;
            root /srv/http/webdav/data;
            client_body_temp_path /tmp/nginx-webdav;
            client_max_body_size 0;
            dav_methods PUT DELETE MKCOL COPY MOVE;
            dav_ext_methods PROPFIND OPTIONS;
            create_full_put_path on;
            dav_access user:rw group:r;
            autoindex on;
        }
    }
    # CalDAV and CardDAV
    server {
        listen 443 ssl http2;
        listen [::]:443 ssl http2;
        server_name caldav.example.com carddav.example.com;
        location /.well-known/ {
            root /srv/http/www;
        }
        location /.well-known/caldav {
            return 301 https://caldav.example.com/;
        }
        location /.well-known/carddav {
            return 301 https://carddav.example.com/;
        }
        add_header "Access-Control-Allow-Origin" "$cors_origin_header" always;
        add_header "Access-Control-Allow-Credentials" "true" always;
        add_header "Access-Control-Expose-Headers" "$cors_expose_headers" always;
        location / {
            if ($preflight = "truetruetrue"){
                add_header "Access-Control-Allow-Origin" "$cors_origin_header";
                add_header "Access-Control-Allow-Headers" "$cors_allow_headers";
                add_header "Access-Control-Allow-Methods" "REPORT, PROPFIND, COPY, MOVE, MKCOL, CONNECT, DELETE, DONE, GET, HEAD, OPTIONS, PATCH, POST, PUT";
                add_header "Access-Control-Max-Age" 1728000;
                add_header "Content-Type" "text/plain; charset=UTF-8";
                add_header "Content-Length" 0;
                return 204;
            }
            auth_basic "Not currently available";
            auth_basic_user_file /etc/nginx/caldav/htpasswd;
            proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
            proxy_pass_header Authorization;
            proxy_pass http://127.0.0.1:5232/;
        }
    }
}</code></pre></div></div>
<h1>2
<tt> </tt><a name="(part._.C.O.R.S_.Requests_and_.Responses)"></a>CORS Requests and Responses</h1>
<h2>2.1
<tt> </tt><a name="(part._.Preflight)"></a>Preflight</h2>
<p>When a script running in a secure browser attempts to make a cross-origin request, the browser first sends a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> (for non-trivial requests), and then sends the actual request if the server advertises that <a class="techoutside" data-pltdoc="x" href="#%28tech._cor%29"><span class="techinside">CORS</span></a> is enabled for that request.
A <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> might be skipped for an HTTP <span class="stt">GET</span> method request, because this is considered harmless.</p>
<h3>2.1.1
<tt> </tt><a name="(part._.Preflight_.Request)"></a>Preflight Request</h3>
<p></p>
<div class="SIntrapara">Essentially, a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> is the browser asking the server for permission to make a request of a certain <span class="stt">METHOD</span> and then share the data and certain headers with a third party.
The <a name="(tech._preflight._request)"></a><span style="font-style: italic">preflight request</span> is an HTTP <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a> method request with the following headers set:
</div>
<div class="SIntrapara">
<ul>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Request-Headers"><span class="stt">Access-Control-Request-Headers</span></a>, which declares the headers that the cross-origin script is requesting.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Request-Method"><span class="stt">Access-Control-Request-Method</span></a>, which declares the method of the request that the cross-origin script wants to send.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin"><span class="stt">Origin</span></a>, which declares the domain of the origin of the script that wants to make a cross-origin request.</p></li></ul></div>
<p>For HTTP servers serving static content or scripts that don’t use <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a>, it’s enough to detect an <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a> request, set the above headers, and return a 204 status code.
For some HTTP servers, like WebDAV and CalDAV, the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a> request has another use, and we really have to detect a
<a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> by detecting both an <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a> request and the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> headers.</p>
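<p>For example, suppose a script served from <span class="stt">https://example.com</span> wants to issue a CalDAV <span class="stt">REPORT</span> to <span class="stt">caldav.example.com</span>. The browser would first send a preflight along these lines (the domains, path, and requested headers are illustrative):</p>
<div class="brush: http">
<pre><code>OPTIONS /user/calendar/ HTTP/1.1
Host: caldav.example.com
Origin: https://example.com
Access-Control-Request-Method: REPORT
Access-Control-Request-Headers: authorization,content-type,depth</code></pre></div>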
<h3>2.1.2
<tt> </tt><a name="(part._.Preflight_.Response)"></a>Preflight Response</h3>
<p></p>
<div class="SIntrapara">To respond to a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>, the server is expected to reply with an empty content response, HTTP status code 204, and the following headers:
</div>
<div class="SIntrapara">
<ul>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin"><span class="stt">Access-Control-Allow-Origin</span></a>, which declares the hostnames that are allowed to make cross-origin requests. This ought to include the <span class="stt">Origin</span> of the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> for the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> to succeed, and can be a wildcard value <span class="stt">*</span>.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers"><span class="stt">Access-Control-Allow-Headers</span></a>, which declares which headers are allowed to be part of the cross-origin request. These are all HTTP <a class="techoutside" data-pltdoc="x" href="#%28tech._request._header%29"><span class="techinside">request headers</span></a>.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods"><span class="stt">Access-Control-Allow-Methods</span></a>, which declares which HTTP methods are allowed as part of a cross-origin request.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Max-Age"><span class="stt">Access-Control-Max-Age</span></a>, an optional header that declares how long this response to a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> can be cached.</p></li></ul></div>
<div class="SIntrapara">The 204 status code declares a success with no content.
An HTTP <a name="(tech._request._header)"></a><span style="font-style: italic">request header</span> is one that originates from the client and is part of a request from the client.
</div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="https://developer.mozilla.org/en-US/docs/Glossary/Request_header"><span class="url">https://developer.mozilla.org/en-US/docs/Glossary/Request_header</span></a> for more.</p></blockquote></blockquote></blockquote></div>
<p>Some browsers (such as Firefox and Chromium) will consider the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> as succeeding if the above headers are present, even if the status code is not 204, and even if the response contains other data.</p>
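<p>Concretely, a preflight response built from the headers in the TLDR configuration would look roughly like this (the origin and values are illustrative):</p>
<div class="brush: http">
<pre><code>HTTP/1.1 204 No Content
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Headers: If-Match, If-None-Match, If-Modified-Since, Depth, Authorization, Origin, X-Requested-With, Range, Accept-Encoding, Content-Length, Content-Type
Access-Control-Allow-Methods: REPORT, PROPFIND, COPY, MOVE, MKCOL, CONNECT, DELETE, DONE, GET, HEAD, OPTIONS, PATCH, POST, PUT
Access-Control-Max-Age: 1728000</code></pre></div>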
<p></p>
<div class="SIntrapara">Some headers are part of <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers"><span class="stt">Access-Control-Allow-Headers</span></a> by default, as they are considered safe.
</div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="https://developer.mozilla.org/en-US/docs/Glossary/CORS-safelisted_response_header"><span class="url">https://developer.mozilla.org/en-US/docs/Glossary/CORS-safelisted_response_header</span></a> for more details.</p></blockquote></blockquote></blockquote></div>
<p></p>
<div class="SIntrapara">Figuring out exactly which headers to list in <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Headers"><span class="stt">Access-Control-Allow-Headers</span></a> is a little annoying.
For my WebDAV (nginx) and CalDAV (radicale) servers, the following list seemed sufficient for my uses:
</div>
<div class="SIntrapara">
<ul>
<li>
<p>The following standard headers:
<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Authorization"><span class="stt">Authorization</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Origin"><span class="stt">Origin</span></a>, <span class="stt">X-Requested-With</span>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Range"><span class="stt">Range</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Accept-Encoding"><span class="stt">Accept-Encoding</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length"><span class="stt">Content-Length</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type"><span class="stt">Content-Type</span></a>;</p></li>
<li>
<p>The following DAV headers: <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Match"><span class="stt">If-Match</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-None-Match"><span class="stt">If-None-Match</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/If-Modified-Since"><span class="stt">If-Modified-Since</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Depth"><span class="stt">Depth</span></a></p></li></ul></div>
<div class="SIntrapara">This will depend on exactly which web app is communicating with the server, what it relies on, and what the underlying server is.
You may need to do a bunch of testing in the web developer’s console to figure it out.</div>
<p>Similarly, figuring out exactly which methods to list in <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods"><span class="stt">Access-Control-Allow-Methods</span></a> depends on the app and server (but not the browser).
These methods are probably better specified.
For WebDAV and CalDAV, the following were sufficient:
<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/REPORT"><span class="stt">REPORT</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PROPFIND"><span class="stt">PROPFIND</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/COPY"><span class="stt">COPY</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/MOVE"><span class="stt">MOVE</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/MKCOL"><span class="stt">MKCOL</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/CONNECT"><span class="stt">CONNECT</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/DELETE"><span class="stt">DELETE</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/DONE"><span class="stt">DONE</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/GET"><span class="stt">GET</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/HEAD"><span class="stt">HEAD</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PATCH"><span class="stt">PATCH</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/POST"><span class="stt">POST</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/PUT"><span class="stt">PUT</span></a>.</p>
<h2>2.2
<tt> </tt><a name="(part._.Cross-.Origin_.Requests)"></a>Cross-Origin Requests</h2>
<p></p>
<div class="SIntrapara">After a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>, the browser will start sending cross-origin HTTP requests.
These will be normal HTTP requests, but the browser will expect the following additional headers in the response:
</div>
<div class="SIntrapara">
<ul>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Origin"><span class="stt">Access-Control-Allow-Origin</span></a>, which declares the hostnames of cross-origin scripts that this response can be shared with.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Credentials"><span class="stt">Access-Control-Allow-Credentials</span></a>, which is either "true" or "false", and declares whether the authorization information in this response can be shared with cross-origin scripts.</p></li>
<li>
<p><a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers"><span class="stt">Access-Control-Expose-Headers</span></a>, which declares which HTTP <a class="techoutside" data-pltdoc="x" href="#%28tech._response._header%29"><span class="techinside">response headers</span></a> can be exposed to the cross-origin script.</p></li></ul></div>
<div class="SIntrapara">An HTTP <a name="(tech._response._header)"></a><span style="font-style: italic">response header</span> is one that originates from the server and is part of a response from the server. </div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="https://developer.mozilla.org/en-US/docs/Glossary/Response_header"><span class="url">https://developer.mozilla.org/en-US/docs/Glossary/Response_header</span></a> for more.</p></blockquote></blockquote></blockquote></div>
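<p>Putting this together, a cross-origin <span class="stt">PROPFIND</span> response from the DAV server might carry headers like the following (the status, origin, and values are illustrative):</p>
<div class="brush: http">
<pre><code>HTTP/1.1 207 Multi-Status
Access-Control-Allow-Origin: https://example.com
Access-Control-Allow-Credentials: true
Access-Control-Expose-Headers: Dav, Content-Type, Content-Range, Content-Language, Date, Content-Length, Content-Encoding, Etag, Last-Modified
Content-Type: application/xml; charset=utf-8
Etag: "1d-5c065cd8"</code></pre></div>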
<p></p>
<div class="SIntrapara">For my WebDAV and CalDAV servers, I needed to expose via <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Expose-Headers"><span class="stt">Access-Control-Expose-Headers</span></a> the following for my uses:
</div>
<div class="SIntrapara">
<ul>
<li>
<p>The following standard <a class="techoutside" data-pltdoc="x" href="#%28tech._response._header%29"><span class="techinside">response headers</span></a>:
<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Type"><span class="stt">Content-Type</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Range"><span class="stt">Content-Range</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Language"><span class="stt">Content-Language</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Date"><span class="stt">Date</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Length"><span class="stt">Content-Length</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Content-Encoding"><span class="stt">Content-Encoding</span></a>;</p></li>
<li>
<p>The following <a class="techoutside" data-pltdoc="x" href="#%28tech._response._header%29"><span class="techinside">response headers</span></a> that have to do with cache control:
<a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Etag"><span class="stt">Etag</span></a>, <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><span class="stt">Last-Modified</span></a>.
You may want to add <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Pragma"><span class="stt">Pragma</span></a> if you support HTTP 1.0, and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Cache-Control"><span class="stt">Cache-Control</span></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Expires"><span class="stt">Expires</span></a> if your server needs to direct your app about cache expiration. <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Etag"><span class="stt">Etag</span></a> and <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Last-Modified"><span class="stt">Last-Modified</span></a> were sufficient for detecting changes between the local and remote versions of DAV files in my app.</p></li>
<li>
<p>The following DAV-specific headers: <a href="http://www.webdav.org/specs/rfc2518.html#HEADER_DAV">DAV</a>.</p></li></ul></div>
<h1>3
<tt> </tt><a name="(part._.Configuring_nginx)"></a>Configuring nginx</h1>
<p>Configuring <span class="stt">nginx</span> correctly is tricky due to the design of the <span class="stt">nginx</span> configuration language.
It is a declarative language, but can look imperative and trip us up.
We have to be careful in how we conditionally add headers and process requests.</p>
<p><span class="stt">nginx</span> also doesn’t allow us to use <span class="default"><code class="highlight-inline"><span class="k">set</span></code></span> to create variables in all contexts, so we have to be a little clever at times.</p>
<h2>3.1
<tt> </tt><a name="(part._.Configure_.Valid_.Cross-.Origin_.Hosts)"></a>Configure Valid Cross-Origin Hosts</h2>
<p></p>
<div class="SIntrapara">To limit which domains can issue a cross-origin request, we create a safelist and set a variable based on the <span class="stt">Origin</span> header of the request.
We use <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span> to declare the variable <span class="default"><code class="highlight-inline"><span class="k">$cors_origin_header</span></code></span> to be the origin, if the origin is on the safelist.
</div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="http://nginx.org/en/docs/http/ngx_http_map_module.html#map"><span class="url">http://nginx.org/en/docs/http/ngx_http_map_module.html#map</span></a> for more.</p></blockquote></blockquote></blockquote></div>
<div class="brush: nginx">
<pre><code>map $http_origin $cors_origin_header {
    hostnames;
    default "https://example.com";
    "https://example.com" "$http_origin";
    "https://www.example.com" "$http_origin";
}</code></pre></div>
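<p>Tracing the map by hand (the origins are hypothetical), the variable resolves as follows:</p>
<div class="brush: nginx">
<pre><code># Origin: https://www.example.com -> $cors_origin_header = "https://www.example.com"
# Origin: https://attacker.test   -> $cors_origin_header = "https://example.com" (the default)
# No Origin header at all         -> $cors_origin_header = "https://example.com" (the default)</code></pre></div>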
<p>In this safelist, we allow cross-origin requests from <span class="default"><code class="highlight-inline"><span class="k">https://example.com</span></code></span> and <span class="default"><code class="highlight-inline"><span class="k">https://www.example.com</span></code></span>, but no other hosts.
We could use the wildcard <span class="default"><code class="highlight-inline"><span class="k">"*"</span></code></span> to allow requests from anyone, although browsers reject a wildcard origin when <span class="stt">Access-Control-Allow-Credentials</span> is <span class="stt">true</span>, so credentialed DAV requests require echoing an explicit origin as we do here.</p>
<h2>3.2
<tt> </tt><a name="(part._.Configure_.C.O.R.S_.Headers)"></a>Configure CORS Headers</h2>
<p>In <span class="default"><code class="highlight-inline"><span class="k">http</span></code></span> context, I use the following <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span>s to declare the <a class="techoutside" data-pltdoc="x" href="#%28tech._cor%29"><span class="techinside">CORS</span></a> request and response headers.
This is an abuse of <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span> to give us the ability to define variables in <span class="default"><code class="highlight-inline"><span class="k">http</span></code></span> context, since <span class="default"><code class="highlight-inline"><span class="k">set</span></code></span> doesn’t work in <span class="default"><code class="highlight-inline"><span class="k">http</span></code></span> context.</p>
<p>You’re free to inline these header values later, but separating them out into these variables made them easier to reuse in both the WebDAV and CalDAV servers.</p>
<div class="brush: nginx">
<pre><code># Declare allowed CORS Expose Headers; each is an HTTP response header.
map $host $std_response_headers {
    default "Content-Type, Content-Range, Content-Language, Date, Content-Length, Content-Encoding";
}
map $host $cache_control_response_headers {
    default "Etag, Last-Modified";
}
map $host $dav_response_headers {
    default "DAV";
}
map $host $cors_expose_headers {
    default "${dav_response_headers}, ${std_response_headers}, ${cache_control_response_headers}";
}
# Declare allowed CORS Request Headers; each is an HTTP request header.
map $host $std_request_headers {
    default "Authorization, Origin, X-Requested-With, Range, Accept-Encoding, Content-Length, Content-Type";
}
map $host $dav_request_headers {
    default "If-Match, If-None-Match, If-Modified-Since, Depth";
}
map $host $cors_allow_headers {
    default "${dav_request_headers}, ${std_request_headers}";
}</code></pre></div>
<h2>3.3
<tt> </tt><a name="(part._.Process_.C.O.R.S_.Requests)"></a>Process CORS Requests</h2>
<p>Next, we need to detect a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>.
We might be tempted to use <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span>, but remember: <a href="https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/">If is Evil</a>, so we want to avoid it.</p>
<p></p>
<div class="SIntrapara">Instead, we’re going to use <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span> to create a variable that is equal to <span class="default"><code class="highlight-inline"><span class="k">"truetruetrue"</span></code></span> if and only if we detect a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>.
This time, we’re using <span class="default"><code class="highlight-inline"><span class="k">map</span></code></span> as intended, to conditionally define variables.
</div>
<div class="SIntrapara">
<div class="brush: nginx">
<pre><code>map $http_origin $cors_origin_header {
    hostnames;
    default "https://example.com";
    "https://example.com" "$http_origin";
    "https://www.example.com" "$http_origin";
}
map $http_access_control_request_headers $preflight_h {
    default "true";
    "" "false";
}
map $http_access_control_request_method $preflight_m {
    default "true";
    "" "false";
}
map $request_method $preflight {
    default "false";
    "OPTIONS" "${preflight_h}${preflight_m}true";
}</code></pre></div></div>
<div class="SIntrapara">We set the value of <span class="default"><code class="highlight-inline"><span class="k">$preflight</span></code></span> to <span class="default"><code class="highlight-inline"><span class="k">"truetruetrue"</span></code></span> when we detect a (non-empty) <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Request-Headers"><span class="stt">Access-Control-Request-Headers</span></a> header, a (non-empty) <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Request-Method"><span class="stt">Access-Control-Request-Method</span></a>, and the request method is <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Methods/OPTIONS"><span class="stt">OPTIONS</span></a>.
We set the variables through string concatenation to emulate boolean <span class="stt">and</span>, since <span class="stt">nginx</span> does not support nested conditions or boolean arithmetic.</div>
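<p>Tracing the maps by hand, the concatenation yields the following values:</p>
<div class="brush: nginx">
<pre><code># OPTIONS with both Access-Control-Request-* headers -> $preflight = "truetruetrue"
# OPTIONS with only Access-Control-Request-Method    -> $preflight = "falsetruetrue" (an ordinary DAV OPTIONS)
# Any non-OPTIONS request method                     -> $preflight = "false"</code></pre></div>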
<p></p>
<div class="SIntrapara">To actually detect and process a <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>, we add the following code in <span class="default"><code class="highlight-inline"><span class="k">location</span></code></span> context in the <span class="default"><code class="highlight-inline"><span class="k">server</span></code></span> on which you want to enable <a class="techoutside" data-pltdoc="x" href="#%28tech._cor%29"><span class="techinside">CORS</span></a>.
I add it in the <span class="default"><code class="highlight-inline"><span class="k">location</span><span class="w"> </span><span class="s">/</span></code></span> block of both my WebDAV and CalDAV <span class="default"><code class="highlight-inline"><span class="k">server</span></code></span> blocks.
</div>
<div class="SIntrapara">
<div class="brush: nginx">
<pre><code>if ($preflight = "truetruetrue"){
    add_header "Access-Control-Allow-Origin" "$cors_origin_header";
    add_header "Access-Control-Allow-Headers" "$cors_allow_headers";
    add_header "Access-Control-Allow-Methods" "REPORT, PROPFIND, COPY, MOVE, MKCOL, CONNECT, DELETE, DONE, GET, HEAD, OPTIONS, PATCH, POST, PUT";
    add_header "Access-Control-Max-Age" 1728000;
    add_header "Content-Type" "text/plain; charset=UTF-8";
    add_header "Content-Length" 0;
    return 204;
}</code></pre></div></div>
<div class="SIntrapara">Note that due to limitations on <span class="default"><code class="highlight-inline"><span class="k">add_header</span></code></span>, this <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span> block <span class="emph">must</span> appear in <span class="default"><code class="highlight-inline"><span class="k">location</span></code></span> context.
Note that we also cannot move any <span class="default"><code class="highlight-inline"><span class="k">add_header</span></code></span> directive outside the <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span>.
The <span class="default"><code class="highlight-inline"><span class="k">add_header</span></code></span> directives are not executed sequentially; all directives defined at the current level are applied together as a block, and defining any at one level discards those inherited from enclosing levels.
</div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header"><span class="url">http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header</span></a>.</p></blockquote></blockquote></blockquote></div>
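<p>As a concrete illustration, the following sketch (hypothetical headers and locations) shows how <span class="stt">add_header</span> behaves across configuration levels:</p>

```nginx
server {
    add_header "X-Server-Level" "1";

    location /inherits/ {
        # No add_header here, so responses inherit X-Server-Level.
    }

    location /overrides/ {
        # Defining any add_header at this level discards the inherited
        # headers; responses carry only X-Location-Level.
        add_header "X-Location-Level" "2";
    }
}
```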
<div class="SIntrapara">Note also that this <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span> <span class="emph">must</span> end in <span class="default"><code class="highlight-inline"><span class="k">return</span><span class="w"> </span><span class="mi">204</span></code></span>.
This is part of the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a> response (although some browsers will let you get away without it), and necessary for <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span> to behave correctly, since <a href="https://www.nginx.com/resources/wiki/start/topics/depth/ifisevil/">If is Evil</a>.</div>
<p>You can customize the <a href="https://developer.mozilla.org/en-US/docs/Web/HTTP/Headers/Access-Control-Allow-Methods"><span class="stt">Access-Control-Allow-Methods</span></a> header depending on the server and your app to provide the least privilege.</p>
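<p>For example, a web app that only ever reads calendar data might get by with a much narrower list. The set below is a sketch; derive the right one from the requests your app actually makes:</p>

```nginx
# Hypothetical least-privilege method list for a read-only CalDAV client.
add_header "Access-Control-Allow-Methods" "GET, HEAD, OPTIONS, PROPFIND, REPORT";
```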
<p></p>
<div class="SIntrapara">Finally, we add the headers for other cross-origin requests.
We add the following in any valid context, except the <span class="default"><code class="highlight-inline"><span class="k">if</span></code></span> body for the <a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight request</span></a>.
I added them in <span class="default"><code class="highlight-inline"><span class="k">server</span></code></span> context.
</div>
<div class="SIntrapara">
<div class="brush: nginx">
<pre><code>add_header "Access-Control-Allow-Origin" "$cors_origin_header" always;
add_header "Access-Control-Allow-Credentials" "true" always;
add_header "Access-Control-Expose-Headers" "$cors_expose_headers" always;</code></pre></div></div>
<div class="SIntrapara">Note that the <span class="default"><code class="highlight-inline"><span class="k">always</span></code></span> argument is required for non-<a class="techoutside" data-pltdoc="x" href="#%28tech._preflight._request%29"><span class="techinside">preflight requests</span></a>, since the HTTP response codes for successful requests will variously be 207, 200, and 304 (and maybe others), and <span class="default"><code class="highlight-inline"><span class="k">add_header</span></code></span> without <span class="default"><code class="highlight-inline"><span class="k">always</span></code></span> does not add headers to responses with some of these status codes.
</div>
<div class="SIntrapara">
<blockquote class="refpara">
<blockquote class="refcolumn">
<blockquote class="refcontent">
<p>See <a href="http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header"><span class="url">http://nginx.org/en/docs/http/ngx_http_headers_module.html#add_header</span></a> for more details.</p></blockquote></blockquote></blockquote></div>
<h1>4
<tt> </tt><a name="(part._.Conclusion_and_.Debugging)"></a>Conclusion and Debugging</h1>
<p>Now, if you look in the Network Monitor of your browser (Ctrl+Shift+E), and click "XHR", you should see some successful cross-origin requests from your web app.
If you see they’re being rejected, try analyzing the request, and changing the above configurations with additional headers or safelisted origins.</p>
<h1 class="fake-header">Setting up your backup service</h1>
<p>William J. Bowman, 2020-06-30</p>
<p>I just ran the command <span class="stt">rm -rf ~</span>, deleting all my personal files in the process.
This was not the first time, and it was no big deal, because I back up my files
with automatic rolling backups.
My backup system is secure, redundant, and has low resource requirements.
The backup repository is encrypted, deduplicated, compressed, and mirrored
across multiple machines.
You can choose to use any or none of these features while following this guide.</p>
<p>In this guide, I describe how to set up a secure and robust backup service
yourself, which runs on Linux, macOS, and Windows via WSL 2.
I provide my own scripts, config files, and workflows for maintaining,
validating, and restoring the backups.
This is all setup using free software, supports multiple configurations with
varying degrees of security and redundancy, and scales well to more backup
clients.</p>
<p>If you’d prefer not to set this up yourself and you run macOS or Windows, I
recommend Backblaze:</p>
<blockquote>
<p><a href="https://www.backblaze.com/cloud-backup.html#af9v9g"><span class="url">https://www.backblaze.com/cloud-backup.html#af9v9g</span></a></p></blockquote>
<p>They automatically handle everything, including most of the features I want in a
backup service and some I could never implement myself, for $6/m per machine
(USD).</p>
<!--more-->
<p></p>
<div class="SIntrapara">
<h1 class="fake-header">Table of Contents</h1>
</div>
<div class="SIntrapara">
<table cellpadding="0" cellspacing="0">
<tbody>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3aintro%29">1<span class="hspace"> </span>Introduction</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3aprereq%29">2<span class="hspace"> </span>Install Prerequisite Software</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Backup_.Software%29">2.1<span class="hspace"> </span>Backup Software</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Optional_.G.U.I_for_.Client%29">2.2<span class="hspace"> </span>Optional GUI for Client</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Mirror_.Software%29">2.3<span class="hspace"> </span>Mirror Software</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3ainit%29">3<span class="hspace"> </span>Initialize the Backup Repository</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Setup_.Server_.Environment%29">3.1<span class="hspace"> </span>Setup Server Environment</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Setup_.Client-.Only_.Environment%29">3.2<span class="hspace"> </span>Setup Client-Only Environment</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Create_the_.Encrypted_.Repository%29">3.3<span class="hspace"> </span>Create the Encrypted Repository</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._sec~3ainit~3async-client-only%29">3.4<span class="hspace"> </span>Mirror the Client-Only Repository Offsite</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3aconfig-client%29">4<span class="hspace"> </span>Configure the Backup Client</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Install_.Backup_.Script%29">4.1<span class="hspace"> </span>Install Backup Script</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._sec~3aconfig-client~3aexclude%29">4.2<span class="hspace"> </span>Exclude Extraneous Files From Backup</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Configure_.Access_to_the_.Backup_.Repository%29">4.3<span class="hspace"> </span>Configure Access to the Backup Repository</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Client-only_.Repository_.Folder%29">4.3.1<span class="hspace"> </span>Client-only Repository Folder</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Backup_.Server_via_.S.S.H%29">4.3.2<span class="hspace"> </span>Backup Server via SSH</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Least_.Priviledge_for_.Client_.S.S.H_.Key%29">4.3.3<span class="hspace"> </span>Least Privilege for Client SSH Key</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3amirrors%29">5<span class="hspace"> </span>Configure Mirrors</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Least_.Priviledge_for_.Mirrors%29">5.1<span class="hspace"> </span>Least Privilege for Mirrors</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._sec~3amonitor%29">6<span class="hspace"> </span>Monitor and Check Backups</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Check_.Backups_are_.Happening%29">6.1<span class="hspace"> </span>Check Backups are Happening</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Integrity_.Check_the_.Repository%29">6.2<span class="hspace"> </span>Integrity Check the Repository</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Prune_.Expired_.Snapshots%29">6.3<span class="hspace"> </span>Prune Expired Snapshots</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toclink" data-pltdoc="x" href="#%28part._.Finding_.Large_.Extraneous_.Files_in_the_.Repository%29">6.4<span class="hspace"> </span>Finding Large Extraneous Files in the Repository</a></p></td></tr>
<tr>
<td>
<p><span class="hspace"></span></p></td></tr>
<tr>
<td>
<p><span class="hspace"> </span><a class="toptoclink" data-pltdoc="x" href="#%28part._.Restore_from_.Backups%29">7<span class="hspace"> </span>Restore from Backups</a></p></td></tr></tbody></table></div>
<h1>1
<tt> </tt><a name="(part._sec~3aintro)"></a>Introduction</h1>
<p>This guide will help you set up a backup system that automatically records hourly
snapshots, compresses, deduplicates, and encrypts them, enabling a very robust
and secure backup system that takes up very little drive space.
For example, I have four machines backed up, with 2.5TB of snapshots stored in 21GB of
space, mirrored on machines in multiple locations.
It would take an extraordinary event for me to lose data.
I’ve successfully recovered GBs of data, usually lost through my own stupidity,
and occasionally through various tools corrupting files or the whole
filesystem.</p>
<p>I describe two main configuration options: (1) client-only, which requires only a
single machine but relies on an external service for saving the backups
offsite; or (2) a client/server approach that requires access to an
always-on server but offers more redundancy.
Within these two main configurations, I describe additional configuration
measures, such as setting up offsite mirrors for the backup repository and
implementing the principle of least privilege to restrict remote access while
still automating backups.</p>
<p>At the end, you too will be able to (but probably shouldn’t) use <span class="stt">rm -rf</span>
without fear, among other benefits.</p>
<h1>2
<tt> </tt><a name="(part._sec~3aprereq)"></a>Install Prerequisite Software</h1>
<h2>2.1
<tt> </tt><a name="(part._.Backup_.Software)"></a>Backup Software</h2>
<p></p>
<div class="SIntrapara">The main backup software is <span class="stt">borg</span>.
</div>
<div class="SIntrapara">
<blockquote>
<p><a href="https://borgbackup.readthedocs.io/en/stable/index.html"><span class="url">https://borgbackup.readthedocs.io/en/stable/index.html</span></a></p></blockquote></div>
<p><span class="stt">borg</span> features automatic compression, deduplication, and encryption.
It also supports an on-demand backup server via SSH, useful file exclusion
methods, and filtering/recreating backup archives for when you realize you
backed up something that you didn’t need and it’s taking up too much space.
These features, its superb documentation, and its ease of use have made it better
than every other tool I’ve tried.</p>
<p>Install this on the server and all clients.</p>
<p></p>
<div class="SIntrapara">For example, on Arch:
</div>
<div class="SIntrapara">
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">pacman -S borg</span></p></td></tr></tbody></table></div></div>
<p></p>
<div class="SIntrapara">Or macOS:
</div>
<div class="SIntrapara">
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">brew cask install borgbackup</span></p></td></tr></tbody></table></div></div>
<h2>2.2
<tt> </tt><a name="(part._.Optional_.G.U.I_for_.Client)"></a>Optional GUI for Client</h2>
<p></p>
<div class="SIntrapara"><span class="stt">borg</span> has an optional, third-party (still free software) GUI you can
install called <span class="stt">vorta</span>.
</div>
<div class="SIntrapara">
<blockquote>
<p><a href="https://vorta.borgbase.com/"><span class="url">https://vorta.borgbase.com/</span></a></p></blockquote></div>
<p>If you’re uncomfortable with commandline nonsense, you can use this on
the clients to configure most of what I describe below.
I haven’t used it myself, so you’ll need to figure out the translation from each
concept and my scripts to the equivalent in the GUI.
The GUI looks pretty discoverable, though, so this shouldn’t be hard.</p>
<h2>2.3
<tt> </tt><a name="(part._.Mirror_.Software)"></a>Mirror Software</h2>
<p>To make redundant mirrors of your backup repository offsite, you’ll need a tool
to synchronize the repository to the mirrors.
I own several machines, and treat all of them as mirrors for maximum redundancy
without relying on cloud services.</p>
<p></p>
<div class="SIntrapara">I recommend <span class="stt">rclone</span> for this, but alternatives like <span class="stt">rsync</span> or
<a href="https://github.com/bcpierce00/unison"><span class="stt">unison</span></a> work well too.
</div>
<div class="SIntrapara">
<blockquote>
<p><a href="https://rclone.org/"><span class="url">https://rclone.org/</span></a></p></blockquote></div>
<p><span class="stt">rclone</span> provides <span class="stt">rsync</span>-like capabilities, but also performs local
caching to speed up computing the delta to be transferred, and supports
various cloud storage backends, in case you want to sync to ~the cloud~.</p>
<p>Install this on all mirrors.</p>
<p></p>
<div class="SIntrapara">Arch:
</div>
<div class="SIntrapara">
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">pacman -S rclone</span></p></td></tr></tbody></table></div></div>
<p></p>
<div class="SIntrapara">macOS:
</div>
<div class="SIntrapara">
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">brew install rclone</span></p></td></tr></tbody></table></div></div>
<p>If you’re using a client-only configuration, you can also install this on the
client if you wish to synchronize the local repository to a cloud service or
secondary machine.
However, unless your cloud service features strong and easy-to-use version
control, I recommend installing <span class="stt">git</span> instead, as there are some downsides
to a client automatically synchronizing a local backup repository without
version control.
I discuss this in <a data-pltdoc="x" href="#%28part._sec~3ainit~3async-client-only%29">Mirror the Client-Only Repository Offsite</a>.</p>
<h1>3
<tt> </tt><a name="(part._sec~3ainit)"></a>Initialize the Backup Repository</h1>
<h2>3.1
<tt> </tt><a name="(part._.Setup_.Server_.Environment)"></a>Setup Server Environment</h2>
<p>For the client/server model, the backup server needs:</p>
<ol>
<li>
<p>A name or fixed IP address. I call this <span class="stt">backup-server.tld</span>.</p></li>
<li>
<p>An SSH daemon.</p></li>
<li>
<p>A user with SSH access, permission to execute <span class="stt">borg</span>, and shell access.
I’ll call this user <span class="stt">backupd</span>.</p></li>
<li>
<p>A folder this user owns to store the backup repository.
I call this folder <span class="stt">~/backups</span> (meaning <span class="stt">~backupd/backups</span>).</p></li></ol>
<h2>3.2
<tt> </tt><a name="(part._.Setup_.Client-.Only_.Environment)"></a>Setup Client-Only Environment</h2>
<p>For the client-only model, you only need a folder that the client has read/write
access to.
I’ll call this folder <span class="stt">~/backups</span>, and call the client user <span class="stt">client-user</span>.</p>
<h2>3.3
<tt> </tt><a name="(part._.Create_the_.Encrypted_.Repository)"></a>Create the Encrypted Repository</h2>
<p>Next we need to initialize the backup repository with an encryption key.
The backup repository is encrypted at-rest.</p>
<p>Run the following command.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">borg init -e repokey ~/backups</span></p></td></tr></tbody></table></div>
<p>You’ll be prompted for a password.</p>
<p>I strongly recommend storing the password in a password manager.
<span class="stt">borg</span> can automatically read from the password manager using the environment
variable <span class="stt">BORG_PASSCOMMAND</span>.
For example, I use <a href="https://www.passwordstore.org/"><span class="stt">pass</span></a> as
my password manager, and set <span class="stt">BORG_PASSCOMMAND="pass show
backup-server.tld/borg"</span>, which in turn causes <span class="stt">gpg-agent</span> to query me or
my login keychain for the master password.</p>
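<p><span class="stt">borg</span> obtains the passphrase by running the value of <span class="stt">BORG_PASSCOMMAND</span> in a shell and reading its standard output, so you can sanity-check a passcommand the same way before wiring it into an automated job. A sketch, using a dummy passcommand in place of a real password-manager invocation:</p>

```shell
#!/bin/sh
# Sanity-check a BORG_PASSCOMMAND before relying on it in cron.
# The dummy command below stands in for a real password-manager call,
# e.g. 'pass show backup-server.tld/borg'.
BORG_PASSCOMMAND='printf hunter2'

# borg runs the command in a shell and reads the passphrase from stdout.
passphrase=$(sh -c "$BORG_PASSCOMMAND")

if [ -z "$passphrase" ]; then
    echo "passcommand produced no output; borg would fail" >&2
    exit 1
fi
echo "passcommand OK (${#passphrase} characters)"
```

<p>If your passcommand pops a pinentry (as <span class="stt">pass</span> via <span class="stt">gpg-agent</span> does), make sure it can also succeed non-interactively when run from cron.</p>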
<p>You can also set the password as a string in the environment variable
<span class="stt">BORG_PASSPHRASE</span>.
For example, if your password is "password", you can set
<span class="stt">BORG_PASSPHRASE="password"</span>.
You should not do this if the environment variable is stored in a plaintext
file.</p>
<p>There are several other initialization options which you can explore if you want
to customize encryption levels, disable encryption (don’t do it!), or optimize
for hardware acceleration, but I’m happy with the default.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">borg init --help</span></p></td></tr></tbody></table></div>
<h2>3.4
<tt> </tt><a name="(part._sec~3ainit~3async-client-only)"></a>Mirror the Client-Only Repository Offsite</h2>
<p>If you do not have a backup server, you need to set up at least one mirror.
We need to make sure the local backup repository is stored somewhere
else in the event of a total data loss locally (<span class="emph">e.g.,</span> a stolen laptop),
or a partial data loss that affects the backup repository itself (<span class="emph">e.g.,</span> a
corrupted drive).</p>
<p>Bad solutions include using a file synchronization service such as Dropbox,
Google Drive, or OneDrive as a mirror; or automatically synchronizing via rsync,
unison, or rclone to a secondary machine.
In the event of data loss, an automatic synchronization service could
overwrite the remote copy with a completely empty backup repository, totally
destroying your backups.
Some file-sync services will allow you to restore older versions of a file,
which mitigates some of this risk.
This is not a good solution unless you’re really sure of the version control.</p>
<p>An acceptable solution is to use a version-controlled file hosting service like
GitHub or GitLab to host your backup repository.
You can set up a cron job to automatically commit and push the backup repository
regularly, tagging each commit in the same way as the archives are tagged.
Ideally, the repository should be private, but since it’s encrypted, this is not
strictly required.
This exposes your data to more risk, as with sufficient resources, a dedicated
attacker (such as a corporation or government) could break the encryption.
However, such attackers probably aren’t targeting you, and if they are, you
might have bigger problems.</p>
<p>To use my suggested method, first make <span class="stt">~/backups</span> a git repo.
Run the following commands.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">cd ~/backups</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git init</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git checkout -b main</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git add -A</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git commit -m "Initialize repo"</span></p></td></tr></tbody></table></div>
<p>Next, add the remote repository:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">git remote add -m main origin git@git-repo.tld:client-user/backup-repo.git</span></p></td></tr></tbody></table></div>
<p>Now add a cron job.
Run <span class="stt">crontab -e</span> and add the following line.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">@hourly /home/client-user/bin/sync-local-borg-repo.sh</span></p></td></tr></tbody></table></div>
<p>Finally, install the following script in <span class="stt">~/bin/</span> for the client:</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">sync-local-borg-repo.sh</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>#!/bin/sh
cd ~/backups
git add -A
git commit --fixup HEAD
git tag `hostname`+`date +"%Y-%m-%dT%H_%M_%S"`
git push origin main</code></pre></div></div>
<p>And make it executable: <span class="stt">chmod +x ~/bin/sync-local-borg-repo.sh</span>.</p>
<p>This method will use considerable client disk space, which is split between the
client and server in the client/server configuration.
I recommend you regularly prune the git repo, but only do so manually after
checking your backups (see <a data-pltdoc="x" href="#%28part._sec~3amonitor%29">Monitor and Check Backups</a>).
Setting up an automatic job to prune it risks deleting your backup repository in
the event of a data loss.
The commit option <span class="stt">--fixup HEAD</span> in the sync script makes this easy with the
following commands:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">env EDITOR=true git rebase --root --autosquash -i</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git gc</span></p></td></tr>
<tr>
<td>
<p><span class="stt">git push -f origin main</span></p></td></tr></tbody></table></div>
<p>This will squash the entire history of the repo and force push to the remote.
Losing the history is not a big deal, since the backup repository is actually
keeping hourly snapshots.
The git history is only for preventing synchronization from losing data if an
automatic push happens after a data loss.</p>
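<p>The squash workflow can be exercised end-to-end in a throwaway repository. The sketch below uses a temporary directory and a dummy identity, and omits the force push to <span class="stt">origin</span>:</p>

```shell
#!/bin/sh
# Demonstrate pruning fixup history with rebase --autosquash in a scratch repo.
set -e
repo=$(mktemp -d)
cd "$repo"
git init -q
git config user.email demo@example.com
git config user.name "Demo User"
git checkout -q -b main

echo v1 > data; git add -A; git commit -qm "Initialize repo"
# Two simulated hourly syncs, each committed as a fixup of the previous HEAD.
echo v2 > data; git add -A; git commit -q --fixup HEAD
echo v3 > data; git add -A; git commit -q --fixup HEAD

# EDITOR=true accepts the generated todo list unmodified, so the fixup
# commits are squashed into the root commit.
env EDITOR=true git rebase --root --autosquash -i
git gc --quiet
echo "history now has $(git rev-list --count HEAD) commit(s)"
```

<p>Losing the intermediate commits is fine, since the borg repository itself keeps the snapshots; the git history exists only to guard against a bad automatic push.</p>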
<h1>4
<tt> </tt><a name="(part._sec~3aconfig-client)"></a>Configure the Backup Client</h1>
<p>Each backup client needs:</p>
<ol>
<li>
<p>A user with read access to all files included in the backup.
I call this user <span class="stt">client-user</span>.
For me, this is my username on the client machine.
In some circumstances, I create a group, <span class="stt">backupg</span>, to give this user read
access to special files.</p></li>
<li>
<p>A cron daemon of some kind.</p></li></ol>
<p>To start the backup system, we need to add a script that automatically backs
up files, excluding any extraneous files.
I take the approach of including everything by default, and then manually
inspecting archives from time to time for large extraneous files and folders.</p>
<h2>4.1
<tt> </tt><a name="(part._.Install_.Backup_.Script)"></a>Install Backup Script</h2>
<p>I use the following script, which I set to run every hour.
Add the following cron job to <span class="stt">client-user</span>’s crontab by running
<span class="stt">crontab -e</span>, and adding:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">@hourly /home/client-user/bin/borg-backup.sh</span></p></td></tr></tbody></table></div>
<p>Then install the following script in <span class="stt">~/bin/</span> for <span class="stt">client-user</span>.</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">borg-backup.sh</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>#!/bin/sh
## borg-backup.sh
## Usage:
# run `borg-backup.sh`
#
# Optional environment variable inputs:
# - TAG By default, the tag for the archive is set using the hostname of the
# client machine. To manually set a tag, set the environment variable
# `TAG` prior to running, e.g., `env TAG="manual-tag+"
# borg-backup.sh`.
# - WAIT The wait time in seconds to obtain a write lock on the repository from
# the server. By default, 600 seconds (10 minutes).
## Configuration
# Set to the location of the backup repository.
# Can be a remote directory, using SSH, or a local directory.
# Make sure the SSH agent and/or SSH key is readable by the backup daemon,
# and the remote location is accessible by a key in the ssh-agent or configured
# in .ssh/config.
#
# Example: REPO="backupd@backup-server.tld:backups"
# Example: REPO="~/backups"
REPO="borg-server:backups"
# Set the password or passcommand for encrypted repositories.
export BORG_PASSCOMMAND='pass show backup-server.tld/borg'
## Create auxiliary files to be part of the backup.
# Export the installed package list from the package manager, so it can be backed up.
mkdir -p /tmp/pacman-local/
echo "# Pipe to pakku -S to reinstall" > /tmp/pacman-local/pacman.lst
pacman -Qenq >> /tmp/pacman-local/pacman.lst
pacman -Qemq >> /tmp/pacman-local/pacman.lst
## Create a new backup archive.
# Add additional files to backup as needed.
borg create \
-C lzma,9 \
-c 60 \
--exclude-from ~/borg-exclude \
--exclude-if-present '.borg-ignore' \
--lock-wait ${WAIT:-600} \
$REPO::'{hostname}+'${TAG:-}'{now:%Y-%m-%dT%H:%M:%S}' \
/tmp/pacman-local/ \
/etc/sysctl.d \
/etc/modprobe.d \
/etc/makepkg.conf \
/etc/pacman.conf \
/etc/fstab \
/etc/X11 \
~/</code></pre></div></div>
<p>Make it executable with <span class="stt">chmod +x ~/bin/borg-backup.sh</span>.</p>
<p>There are two necessary configuration steps:</p>
<ul>
<li>
<p>Change the <span class="stt">REPO</span> variable to point to your backup repository.
If you’re using a client-only model, this is the path to the backup
repository <span class="stt">~/backups</span>.
If you’re using a server, you can enter the SSH address and path, or configure
the <span class="stt">.ssh/config</span> file as discussed later.</p></li>
<li>
<p>Change the <span class="stt">export BORG_PASSCOMMAND</span> to export your password manager
command, or change the line to <span class="stt">export BORG_PASSPHRASE</span> to export the
password string as described earlier.
You really shouldn’t use <span class="stt">BORG_PASSPHRASE</span> since this stores the password
in plaintext, but I suppose if your hard drive is encrypted, and the backup
script is only stored on the client, it’s probably fine. Ish.</p></li></ul>
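<p>For reference, the <span class="stt">REPO="borg-server:backups"</span> value in the script resolves through SSH configuration. An <span class="stt">~/.ssh/config</span> entry along these lines works; the host alias, hostname, and key path are placeholders:</p>

```
Host borg-server
    HostName backup-server.tld
    User backupd
    IdentityFile ~/.ssh/backup-client-key
```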
<p>You’ll probably also want to change the list of files that are included in the
snapshot.
I include my list for reference, which assumes an Arch Linux machine and
includes some of my customized root config files.</p>
<p></p>
<div class="SIntrapara">The script is documented with its major features, but I’ll explain the
<span class="stt">borg</span> command in more detail.
</div>
<div class="SIntrapara">
<ul>
<li>
<p>The option <span class="stt">-C lzma,9</span> enables LZMA compression level 9 (maximum
compression).
This slows down archive creation but decreases the archive size substantially.
In my experience, my snapshots take about a minute to create and upload to
the server, so I’m fine with max compression.</p></li>
<li>
<p>The option <span class="stt">-c 60</span> tells <span class="stt">borg</span> to create a checkpoint every 60
seconds, saving a partial backup if the backup process is interrupted.
This can happen if you’re running on a laptop that goes to sleep in the
middle of the backup, for example.
I choose 60 seconds since most of my snapshots only take that long, so any
longer might indicate a real change to keep track of.</p></li>
<li>
<p>The option <span class="stt">--exclude-from ~/borg-exclude</span> excludes any files that match
the pattern specification found in the file <span class="stt">~/borg-exclude</span>.
I use this file to filter common files, such as compiler generated files.
I share this file in <a data-pltdoc="x" href="#%28part._sec~3aconfig-client~3aexclude%29">Exclude Extraneous Files From Backup</a>.</p></li>
<li>
<p>The option <span class="stt">--exclude-if-present '.borg-ignore'</span> excludes the directory
from the backup if there is a file named <span class="stt">.borg-ignore</span> in that directory.
I use this for excluding directories that don’t neatly fit some pattern in
<span class="stt">borg-exclude</span>, such as large git repos that I contribute to infrequently but
don’t manage, or cache or temporary directories.</p></li>
<li>
<p>The option <span class="stt">--lock-wait</span> specifies how long to wait for a lock.
Only one client can write to the backup repository at a time.
I use 10 minutes as a default; my clients usually only take a minute or so to
finish running a backup, so waiting 10 minutes should be enough for all clients
to finish if there’s contention.</p></li>
<li>
<p>The archive-name argument, <span class="stt">$REPO::'{hostname}+' ...</span>, tells <span class="stt">borg</span> where the backup
repository is located (before the <span class="stt">::</span>), and what the backup archive should be
named.
I name the archive using the hostname of the client, followed by <span class="stt">+</span> as a
delimiter, followed optionally by some tag, followed by a timestamp.
This naming scheme makes it easy to sort and filter backups when validating
backups or searching for a restore point.</p></li>
<li>
<p>The remaining lines are files or directories to include in the backup archive.
All files and sub-directories, recursively, are included, unless excluded by
one of the above exclude options.</p></li></ul></div>
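<p>Putting the options above together, a full invocation looks roughly like the sketch below. The repository location, tag, and include path are illustrative stand-ins, not taken from the real script, which appears earlier in the post.</p>

```shell
#!/bin/sh
# Sketch only: combines the borg options described above.
# Assumptions (illustrative, not from the original script): the repo is
# backupd@backup-server.tld:backups, the lock wait is 600s (the 10 minute
# default mentioned above), and an optional tag is passed as $1.
REPO=backupd@backup-server.tld:backups

archive_name() {
  # hostname + '+' delimiter + optional tag + timestamp, as described above
  printf '%s+%s%s\n' "$(uname -n)" "${1:+$1-}" "$(date +%Y-%m-%dT%H:%M:%S)"
}

backup() {
  borg create -C lzma,9 -c 60 \
    --exclude-from ~/borg-exclude \
    --exclude-if-present '.borg-ignore' \
    --lock-wait 600 \
    "$REPO::$(archive_name "$1")" \
    ~/
}

# Example (requires borg and an initialized repository):
#   backup nightly
archive_name nightly
```

<p>For instance, <span class="stt">archive_name nightly</span> prints something like <span class="stt">laptop+nightly-2021-05-13T20:08:26</span>, which sorts chronologically per machine and filters nicely with <span class="stt">borg list -P laptop+</span>.</p>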
<h2>4.2
<tt> </tt><a name="(part._sec~3aconfig-client~3aexclude)"></a>Exclude Extraneous Files From Backup</h2>
<p>My <span class="stt">~/borg-exclude</span> file is below.
Install this file in <span class="stt">~/</span> on the client; it only needs read permissions for
<span class="stt">client-user</span>.</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">borg-exclude</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>re:/\.ssh
re:/\.bash_history
.zsh_*
re:/no-backup/
re:/\.junk/
re:/\.cron/
re:workspace/aur4/.*/pkg
re:workspace/aur4/.*/src
re:compiled/
*.tar.xz
*.tar.gz
*/.emacs.d
*/.unison/fp*
*/.unison/ar*
*/.vim/bundle
*~
.*.trash
*.aux
*.log
*.out
*.toc
*.fls
*.swp
*.class
*.pyc
*.fdb_latexmk
*.o
*.out
*.xpi
*.zo
*.dep
*.vo
*.glob
*.bbl
*.safe
*.agdai
*.hi
*.tdo
re:\.mutt/cache
re:\.mutt/sent
re:workspace/.*/paper.pdf
re:workspace/.*/techrpt.pdf
re:workspace/.*/final.pdf
*/retex-cache/*
re:\.gnupg/S\..*
re:\.~lock.*\.odp#
re:/Pictures/.*/\._
re:/Pictures/.*/\.comments
*.DS_Store</code></pre></div></div>
<p>This configuration file accepts exclude patterns, one per line.
Each exclude pattern can be either a shell glob or regexp pattern prefixed by
<span class="stt">re:</span>.
I exclude lots of generated files patterns, certain mail folders, and files or
folders that are tracked by other systems.
Some depend on my workflows and naming conventions, so they might not be
relevant to you.</p>
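<p>Since the <span class="stt">re:</span> patterns are regular expressions matched against the path, you can sanity-check one with <span class="stt">grep -E</span> before relying on it. This is only an approximation of how <span class="stt">borg</span> matches (it uses Python regexes against the full path), but it catches obvious mistakes. The example path below is hypothetical.</p>

```shell
# Approximate a borg 're:' exclude pattern with grep -E to confirm it
# matches the paths you expect to exclude. Example path is hypothetical.
pattern='workspace/aur4/.*/pkg'
path='home/wjb/workspace/aur4/somepkg/pkg/somepkg.tar'
if printf '%s\n' "$path" | grep -Eq "$pattern"; then
  result=excluded
else
  result=kept
fi
echo "$result"   # prints: excluded
```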
<p>If I want to exclude some folder that doesn’t neatly fit a pattern, I run
<span class="stt">touch path/to/folder/.borg-ignore</span>, and <span class="stt">borg</span> will automatically
begin ignoring it due to the <span class="stt">--exclude-if-present</span> option in
<span class="stt">borg-backup.sh</span>.</p>
<p>Be sure to run <span class="stt">touch ~/backups/.borg-ignore</span>.
This will prevent you from DOSing yourself if either you use a client-only
configuration, or if your clients are also mirrors.</p>
<h2>4.3
<tt> </tt><a name="(part._.Configure_.Access_to_the_.Backup_.Repository)"></a>Configure Access to the Backup Repository</h2>
<p>Finally, we need to make sure the backup script has uninterrupted access to the
backup repository.</p>
<h3>4.3.1
<tt> </tt><a name="(part._.Client-only_.Repository_.Folder)"></a>Client-only Repository Folder</h3>
<p>If you’re using a client-only configuration, you’re done!</p>
<h3>4.3.2
<tt> </tt><a name="(part._.Backup_.Server_via_.S.S.H)"></a>Backup Server via SSH</h3>
<p>If you’re running a separate server, we’ll configure SSH access.
Ideally, we don’t even want to be prompted for an SSH key password to ensure
backups are running uninterrupted.
(Although I do deal with this on one of my clients, because I haven’t
configured the keychain to cache the SSH key while logged in.)</p>
<p>I recommend configuring access through the <span class="stt">.ssh/config</span> file, and either a
keychain that caches your SSH key that you use everywhere (probably acceptable
security), or a fresh passwordless SSH key that provides <span class="stt">client-user</span>
restricted access to <span class="stt">borg</span> as the <span class="stt">backupd</span> user on
<span class="stt">backup-server.tld</span> (better practice security).</p>
<p>I’ll assume you have a fresh passwordless private key called
<span class="stt">~/.ssh/id_rsa-borg-client</span> paired with the public key
<span class="stt">~/.ssh/id_rsa-borg-client.pub</span> on the client machines.
You can generate a fresh passwordless key-pair with:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">ssh-keygen -t rsa -b 4096 -C "borg client" -f /home/client-user/.ssh/id_rsa-borg-client -P ""</span></p></td></tr></tbody></table></div>
<p>Make sure to set the permissions correctly, restricting access to the private key.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">chmod 600 ~/.ssh/id_rsa-borg-client</span></p></td></tr></tbody></table></div>
<p>Add the following snippet to your <span class="stt">.ssh/config</span>, and the
<span class="stt">borg-backup.sh</span> will automatically use the SSH key
<span class="stt">~/.ssh/id_rsa-borg-client</span> on the client machine when connecting as
<span class="stt">backupd</span> to the <span class="stt">backup-server.tld</span>.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">Host borg-server</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">Hostname backup-server.tld</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">IdentityFile ~/.ssh/id_rsa-borg-client</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">User backupd</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">ForwardAgent no</span></p></td></tr></tbody></table></div>
<h3>4.3.3
<tt> </tt><a name="(part._.Least_.Priviledge_for_.Client_.S.S.H_.Key)"></a>Least Privilege for Client SSH Key</h3>
<p>If you want to follow better practice security, you should restrict access for
the <span class="stt">id_rsa-borg-client</span> key so it has only the permission it needs: to
communicate with the <span class="stt">borg</span> server.
Add the following line to <span class="stt">~/.ssh/authorized_keys</span> for <span class="stt">backupd</span> on
the server, replacing <span class="stt"><id_rsa-borg-client.pub></span> by the contents of the
public key <span class="stt">~/.ssh/id_rsa-borg-client.pub</span> from the client.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">command="/home/backupd/.ssh/ssh-borg-serve.sh",no-pty,no-agent-forwarding,no-port-forwarding <id_rsa-borg-client.pub></span></p></td></tr></tbody></table></div>
<p>Next, install the following file in <span class="stt">~/.ssh/</span> on the server and give it
execute permissions with <span class="stt">chmod +x ~/.ssh/ssh-borg-serve.sh</span>.</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">ssh-borg-serve.sh</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>#!/bin/sh
set -f
case "$SSH_ORIGINAL_COMMAND" in
"borg serve"*)
exec $SSH_ORIGINAL_COMMAND
;;
# "/usr/lib/ssh/sftp-server")
# exec /usr/lib/ssh/sftp-server -R
# ;;
*)
echo "Invalid command $SSH_ORIGINAL_COMMAND"
exit 1
;;
esac</code></pre></div></div>
<p>This will allow the key <span class="stt">id_rsa-borg-client</span> to run <span class="emph">only</span> a command
starting with <span class="stt">borg serve</span>, which launches the <span class="stt">borg</span> server.
If an attacker gets your <span class="stt">id_rsa-borg-client</span> key, they can launch the
<span class="stt">borg</span> server, but without the backup repository password, they won’t be
able to do anything.</p>
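<p>You can check the wrapper’s <span class="stt">case</span> dispatch locally, without any SSH involved, by substituting <span class="stt">echo</span> for <span class="stt">exec</span>:</p>

```shell
# Local sanity check of ssh-borg-serve.sh's dispatch logic, with echo
# standing in for exec so nothing actually runs.
check() {
  case "$1" in
    "borg serve"*) echo allowed ;;
    *)             echo rejected ;;
  esac
}
check "borg serve --umask=027"   # prints: allowed
check "scp -f /etc/passwd"       # prints: rejected
```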
<p>The second, commented out, command would enable the client to launch a read-only
SFTP server.
This is useful for making all clients mirrors.
However, allowing the client key to also use the SFTP server violates the
principle of least privilege, and you should instead configure a separate mirror
key as described in <a data-pltdoc="x" href="#%28part._sec~3amirrors%29">Configure Mirrors</a>.
An attacker with SFTP access would be able to download the encrypted repository,
and possibly read other files on the server.</p>
<h1>5
<tt> </tt><a name="(part._sec~3amirrors)"></a>Configure Mirrors</h1>
<p>Having backups stored offsite is good, but what if the server goes down, or
is struck by a meteor?
It’s best to have not only offsite backups, but redundant offsite backups.
Thankfully, this is easy to support.
Particularly, if you, like me, have too many computers: a laptop, a desktop, a
media server, a VPS, and a work computer... mirrors galore!</p>
<p>On each mirror, we configure <span class="stt">rclone</span> with the server as a remote.
Add the following to <span class="stt">~/.config/rclone/rclone.conf</span> on the mirror.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">[borg-server]</span></p></td></tr>
<tr>
<td>
<p><span class="stt">type = sftp</span></p></td></tr>
<tr>
<td>
<p><span class="stt">host = backup-server.tld</span></p></td></tr>
<tr>
<td>
<p><span class="stt">user = backupd</span></p></td></tr>
<tr>
<td>
<p><span class="stt">port =</span></p></td></tr>
<tr>
<td>
<p><span class="stt">pass =</span></p></td></tr>
<tr>
<td>
<p><span class="stt">key_file = id_rsa-borg-mirror</span></p></td></tr>
<tr>
<td>
<p><span class="stt">md5sum_command = md5sum</span></p></td></tr>
<tr>
<td>
<p><span class="stt">sha1sum_command = sha1sum</span></p></td></tr></tbody></table></div>
<p>This tells <span class="stt">rclone</span> how to connect to the server via SFTP.
Following the principle of least privilege, we’ll need a new key pair for the
mirror.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">ssh-keygen -t rsa -b 4096 -C "borg mirror" -f /home/client-user/.ssh/id_rsa-borg-mirror -P ""</span></p></td></tr>
<tr>
<td>
<p><span class="stt">chmod 600 ~/.ssh/id_rsa-borg-mirror</span></p></td></tr></tbody></table></div>
<p>And we need to install and restrict the key on the server.
Add the following line to the <span class="stt">~/.ssh/authorized_keys</span> file on the server.
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">command="/home/backupd/.ssh/ssh-borg-mirror.sh",no-pty,no-agent-forwarding,no-port-forwarding <id_rsa-borg-mirror.pub></span></p></td></tr></tbody></table></div>
<p>Next, install the following file in <span class="stt">~/.ssh/</span> on the server and give it
execute permissions with <span class="stt">chmod +x ~/.ssh/ssh-borg-mirror.sh</span>.</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">ssh-borg-mirror.sh</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>#!/bin/sh
set -f
case "$SSH_ORIGINAL_COMMAND" in
"/usr/lib/ssh/sftp-server")
exec /usr/lib/ssh/sftp-server -R
;;
*)
echo "Invalid command $SSH_ORIGINAL_COMMAND"
exit 1
;;
esac</code></pre></div></div>
<p>This restricts the mirror’s key so it can only be used to launch the SFTP server
in read-only mode.</p>
<p>Finally, set up a cron job to mirror the repository.
Run <span class="stt">crontab -e</span> on the mirror and enter:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">@hourly rclone sync borg-server:backups ~/backups</span></p></td></tr></tbody></table></div>
<p><span class="stt">rclone</span> will perform a one-way sync from the server to the mirror every
hour.
<span class="stt">rclone</span> uses a delta transfer algorithm with caching.
It’s faster than <span class="stt">rsync</span>, but with the same low-bandwidth transfer.
It also supports more backends than <span class="stt">rsync</span>, so you can set up additional
mirrors to cloud services like Dropbox, Google Drive, etc, if you want.</p>
<p>Now when a meteor strikes your server just after a burglar stole your laptop,
you’ll still have your data.
Set up LOTS of mirrors for extra redundancy.</p>
<h2>5.1
<tt> </tt><a name="(part._.Least_.Priviledge_for_.Mirrors)"></a>Least Privilege for Mirrors</h2>
<p>I know it seems like we already did this with the whole read-only SFTP server,
but that’s not enough.
Right now, an attacker compromising the mirror key can read <span class="emph">any</span> file that
<span class="stt">backupd</span> has access to.
That’s no good.
Better security practice would be to configure the SSH daemon to <span class="stt">chroot</span> the
mirror to the <span class="stt">~/backups</span> directory, so they can only read this folder.
Recall this folder is encrypted, so an attacker compromising the mirror SSH key
still has to break the encryption to get anything.</p>
<p>Unfortunately, this requires root access on the server, reconfiguring the SSH
daemon, and creating and managing multiple user and group permissions, which you
may be unable or unwilling to do.</p>
<p>To <span class="stt">chroot</span> the mirror, we need a second user on the server, which I’ll call
<span class="stt">mirrord</span>.
The <span class="stt">ssh-borg-mirror.sh</span> script and addition to <span class="stt">authorized_keys</span> we
added to <span class="stt">backupd</span> above should be thrown out, as we require a different
configuration to <span class="stt">chroot</span>.</p>
<p>Next, we need a new group, <span class="stt">mirrorg</span>, to provide <span class="stt">mirrord</span> read access
to the directory <span class="stt">~backupd/backups</span>, owned by <span class="stt">backupd</span>.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">groupadd mirrorg</span></p></td></tr>
<tr>
<td>
<p><span class="stt">gpasswd -a mirrord mirrorg</span></p></td></tr></tbody></table></div>
<p>Now we set the group on <span class="stt">~/backups</span> to <span class="stt">mirrorg</span>, and provide the
group read access.
As user <span class="stt">backupd</span>, run the following commands.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">chgrp -R mirrorg ~backupd/backups</span></p></td></tr>
<tr>
<td>
<p><span class="stt">chmod g+r -R ~backupd/backups</span></p></td></tr></tbody></table></div>
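<p>If you want to rehearse these permission changes before touching the real repository, you can run them against a scratch directory, substituting your own primary group for <span class="stt">mirrorg</span> (creating new groups requires root):</p>

```shell
# Dry run of the group-read setup on a throwaway directory, using the
# caller's primary group in place of mirrorg.
tmp=$(mktemp -d)
mkdir "$tmp/backups"
grp=$(id -gn)
chgrp -R "$grp" "$tmp/backups"
chmod -R g+r "$tmp/backups"
perm=$(stat -c '%A' "$tmp/backups")
echo "$perm"   # the group read bit (5th character) should be 'r'
rm -r "$tmp"
```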
<p>We need to modify the <span class="stt">ssh-borg-serve.sh</span> script (owned by <span class="stt">backupd</span>)
to maintain the group-read permission.
Change the file using the following diff.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">- exec $SSH_ORIGINAL_COMMAND</span></p></td></tr>
<tr>
<td>
<p><span class="stt">+ exec borg serve --umask=027</span></p></td></tr></tbody></table></div>
<p>This will force the <span class="stt">borg</span> server to provide read permissions to
<span class="stt">mirrorg</span> when writing to the backup repository.</p>
<p>Now, modify the SSH daemon to <span class="stt">chroot</span> the <span class="stt">mirrord</span> user.
As <span class="stt">root</span> on the server, add the following to <span class="stt">/etc/ssh/sshd_config</span>.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">Match User mirrord</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">ChrootDirectory /home/backupd/backups</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">ForceCommand internal-sftp -R</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">AllowTcpForwarding no</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">X11Forwarding no</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">PasswordAuthentication no</span></p></td></tr></tbody></table></div>
<p>Finally, add the following line to <span class="stt">~/.ssh/authorized_keys</span> for <span class="stt">mirrord</span>.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt"><id_rsa-borg-mirror.pub></span></p></td></tr></tbody></table></div>
<p>Note that we do not require any restrictions, since the SSH daemon is already
restricting <span class="stt">mirrord</span>.</p>
<p>Now you have a pretty secure mirror.</p>
<h1>6
<tt> </tt><a name="(part._sec~3amonitor)"></a>Monitor and Check Backups</h1>
<h2>6.1
<tt> </tt><a name="(part._.Check_.Backups_are_.Happening)"></a>Check Backups are Happening</h2>
<p>Backups are no good if you can’t restore from them.
I have a weekly reminder to check on my backups.
To check, I run <span class="stt">borg list -P machine-name+</span> on the repository machine
(server, or client-only), which lists the backups for the machine with
<span class="stt">hostname</span> "machine-name".
I check to see that hourly backups are being created for each client.
If they aren’t, the daemon on that client may not be working for some reason.</p>
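<p>This check is easy to script. The sketch below counts how many archives a machine created on a given day; sample <span class="stt">borg list</span>-style output stands in for a real invocation (archive names are in the first column).</p>

```shell
# Count archives created on a given day from `borg list`-style output.
# In real use, replace $sample with output from:
#   borg list -P laptop+ ~/backups
sample='laptop+2021-05-13T09:00:01 Thu, 2021-05-13 09:00:01
laptop+2021-05-13T10:00:02 Thu, 2021-05-13 10:00:02
laptop+2021-05-12T23:00:01 Wed, 2021-05-12 23:00:01'
day=2021-05-13
count=$(printf '%s\n' "$sample" | cut -f 1 -d ' ' | grep -c "$day")
echo "$count archives on $day"   # prints: 2 archives on 2021-05-13
```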
<h2>6.2
<tt> </tt><a name="(part._.Integrity_.Check_the_.Repository)"></a>Integrity Check the Repository</h2>
<p>Every month or so, I run <span class="stt">borg check ~/backups</span>.
This runs some integrity checks on the whole repository, and can take a while.
I recommend running it in a <span class="stt">screen</span> session so you can disconnect and
check back on it later.
I’ve never had any integrity problems.</p>
<h2>6.3
<tt> </tt><a name="(part._.Prune_.Expired_.Snapshots)"></a>Prune Expired Snapshots</h2>
<p>I don’t want to keep hourly snapshots forever.
I have a policy for expiring backups, and a script for doing it.
I keep hourly snapshots for the last 24 hours, daily snapshots for the last
week, weekly snapshots for the last month, and monthly snapshots forever.
With deduplication and my workload, this strikes a good balance between data
recovery and minimizing the repository size.</p>
<p>Each week after checking my backups, I run the following script to prune any
expired snapshots:</p>
<p></p>
<div class="SIntrapara"><a href="//resources/@|filename|">borg-prune.sh</a></div>
<div class="SIntrapara">
<div class="brush: shell">
<pre><code>#!/bin/sh
# borg-prune.sh
## Usage
# - borg-prune.sh machine-name Perform a pruning dry-run, seeing what
# would be pruned.
# - borg-prune.sh machine-name --wet Perform a non-dry run.
REPO=$HOME/backups
DRY_RUN="-n"
if [ "$2" = "--wet" ]; then
echo "Pruning..."
DRY_RUN=""
fi
borg prune --list $REPO --prefix "$1+" \
--keep-hourly 24 \
--keep-daily 7 \
--keep-weekly 4 \
--keep-monthly -1 \
--keep-yearly -1 \
$DRY_RUN \
-v</code></pre></div></div>
<h2>6.4
<tt> </tt><a name="(part._.Finding_.Large_.Extraneous_.Files_in_the_.Repository)"></a>Finding Large Extraneous Files in the Repository</h2>
<p>Sometimes, a large file will get backed up and make the repository
unnecessarily large.
A few times, I’ve accidentally backed up the entire repository into itself,
DOSing my VPS by filling the drive.</p>
<p><span class="stt">borg</span> makes it sort of easy to find these mistakes.</p>
<p>On the repository machine, run <span class="stt">borg info -P machine-name+</span> to get a print
out of the size of each archive for <span class="stt">machine-name</span>.
When one of the archives prints out as suddenly larger, that’s usually a good target.
Copy that archive name; I’ll call it <span class="stt">$archive_name</span>.</p>
<p>Next, we mount the archive to see what files are too large.
Run the following commands on repository machine.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">mkdir -p /tmp/borg</span></p></td></tr>
<tr>
<td>
<p><span class="stt">borg mount ~/backups::$archive_name /tmp/borg</span></p></td></tr></tbody></table></div>
<p>Now we can explore the mounted archive to find large files.
I run the following command, which I alias as <span class="stt">ducks</span> in my shell.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">du -sch * .* | sort -rh | head</span></p></td></tr></tbody></table></div>
<p>This will print out a list of the 10 largest files or folders in the current
directory.
You might need to exclude the <span class="stt">.*</span> pattern if there are no hidden files.</p>
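<p>The alias can also be written as a shell function. One note if you adapt it: with <span class="stt">du -h</span>’s human-readable sizes, it’s GNU <span class="stt">sort</span>’s <span class="stt">-h</span> flag that compares them correctly (plain numeric sort would rank <span class="stt">900M</span> above <span class="stt">1.5G</span>).</p>

```shell
# The `ducks` helper as a function: the ten largest entries in the
# current directory. sort -rh pairs with du's human-readable -h sizes.
ducks() { du -sch -- * .* 2>/dev/null | sort -rh | head; }
```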
<p>I then follow the large directories until I find a likely looking file; call it
<span class="stt">/path/to/large-unnecessary-file</span>.</p>
<p>Once we find a file, we want to exclude it from further backups and remove it
from existing backups.
I add it to the <span class="stt">borg-exclude</span> patterns or add a <span class="stt">.borg-ignore</span> file
as appropriate.
Then, I run the following loop to recreate and filter all archives.
This loop is in <span class="stt">fish</span> syntax; you’ll need to figure out loops in your
shell on your own, because I’ve never figured out how to write a shell loop
properly.</p>
<p>I’ve never had any problems, but <span class="emph">you should back up your repository before
running <span class="stt">borg recreate</span></span>.
Use <span class="stt">rclone</span> to put it anywhere else, at least temporarily.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">for archive in (borg list --lock-wait 600 -P machine-name+ ~/backups | cut -f 1 -d ' ')</span></p></td></tr>
<tr>
<td>
<p><span class="stt"></span><span class="hspace"> </span><span class="stt">yes YES | borg recreate --lock-wait 600 -C lzma,9 -s --exclude "/path/to/large-unnecessary-file" ~/backups::$archive</span></p></td></tr>
<tr>
<td>
<p><span class="stt">end</span></p></td></tr></tbody></table></div>
<p>This is considered experimental, so it requires that you confirm each recreation
by typing "YES".
I just pipe <span class="stt">yes YES</span> because I like to live on the edge, and have mirrors
of this repository if I break something.</p>
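<p>For the record, the same loop in POSIX <span class="stt">sh</span> looks like the sketch below, wrapped in a function so nothing runs until you call it. It still assumes the repository at <span class="stt">~/backups</span> and the <span class="stt">machine-name+</span> prefix from above.</p>

```shell
# POSIX sh version of the fish loop above. Define, then call as e.g.:
#   recreate_all machine-name /path/to/large-unnecessary-file
# (requires borg and a populated repository, so the call is left to you)
recreate_all() {
  borg list --lock-wait 600 -P "$1+" ~/backups | cut -f 1 -d ' ' |
  while read -r archive; do
    yes YES | borg recreate --lock-wait 600 -C lzma,9 -s \
      --exclude "$2" ~/backups::"$archive"
  done
}
```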
<p><span class="stt">borg recreate</span> can take multiple <span class="stt">--exclude</span> flags if you find
multiple files you want removed.
It will also recompress the archive, so you can specify new and different
compression options with <span class="stt">-C</span>, if you want to change the compression
algorithm.</p>
<p>Now the file should be excluded from all existing archives.</p>
<h1>7
<tt> </tt><a name="(part._.Restore_from_.Backups)"></a>Restore from Backups</h1>
<p>In the likely event that you need to restore from backups, run <span class="stt">borg list
-P machine-name+</span> to list the archives available for <span class="stt">machine-name</span>.
This will give you a list of archive names on the left, with some metadata on
the right.
Copy and paste the name for the archive you want to restore from; I’ll call this
<span class="stt">$archive_name</span>.</p>
<p>Next, we mount that archive.
Run the following commands, which create a temporary mount point and
mount the archive.</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">mkdir -p /tmp/borg</span></p></td></tr>
<tr>
<td>
<p><span class="stt">borg mount ~/backups::$archive_name /tmp/borg</span></p></td></tr></tbody></table></div>
<p>You can now see all your backed-up files in <span class="stt">/tmp/borg</span>.</p>
<p>Next, from the client, copy over your files:</p>
<div class="SCodeFlow">
<table cellpadding="0" cellspacing="0" class="SVerbatim">
<tbody>
<tr>
<td>
<p><span class="stt">rsync -avz --progress backupd@backup-server.tld:/tmp/borg/ /</span></p></td></tr></tbody></table></div>Copy/pasting your password into the Runescape Clienturn:https-www-williamjbowman-com:-blog-2020-04-27-copy-pasting-your-password-into-the-runescape-client2020-04-27T09:25:58Z2020-04-27T09:25:58ZWilliam J. Bowman
<p>In a fit of nostalgia, I wanted to play some Runescape this weekend. I discovered that Runescape forbids copy and pasting your password into the client, for bogus security reasons. This poses a problem for me, since my password is a very long randomly generated string. Normally, I would copy and paste it from my password manager.</p>
<p>Thankfully, a little Powershell scripting solves the problem. The script below will, upon execution, switch to the Runescape client and type your password. You need to configure one variable, <code>$password</code>, which should be set using a command that reads your password from your password manager (or, if you don’t care about security, set to your password as a string literal). The default uses my configuration, fetching the password from <code>pass</code> via WSL.</p>
<p>Be careful not to run the script while you’re already logged in, or it might enter your password in chat. It shouldn’t, and it won’t hit enter, but… use at your own risk.</p>
<p><a href="/resources/runescape-login.ps1">runescape-login.ps1</a></p>
<pre><code>## --------------------------------------------------------------------
## Instructions:
# Launch Runescape then run this script while on the login page.
#
# You may need to switch Runescape between windowed and full screen
# after, as alt-tabbing or this script sometimes screws up full screen.
## --------------------------------------------------------------------
## Configure:
# Your runescape password
# $password = "my hard coded password"
# $password = get-password-command
$password = (wsl /usr/bin/pass show runescape.com `| head -n 1)
# Delay.
# How long to wait between grabbing Runescape window and starting to type.
$delay = 1
## --------------------------------------------------------------------
function Show-Process($Process) {
$sig = '
[DllImport("user32.dll")] public static extern bool ShowWindowAsync(IntPtr hWnd, int nCmdShow);
[DllImport("user32.dll")] public static extern int SetForegroundWindow(IntPtr hwnd);
'
$type = Add-Type -MemberDefinition $sig -Name WindowAPI -PassThru
$hwnd = $process.MainWindowHandle
$null = $type::ShowWindowAsync($hwnd, 5)
$null = $type::SetForegroundWindow($hwnd)
}
Show-Process (Get-Process -Name rs2client)
timeout $delay
Add-Type -AssemblyName System.Windows.Forms
$password.ToCharArray() | ForEach-Object {[System.Windows.Forms.SendKeys]::SendWait($_)}</code></pre>Running a public server from WSL 2urn:https-www-williamjbowman-com:-blog-2020-04-25-running-a-public-server-from-wsl-22020-04-25T23:32:30Z2020-04-25T23:32:30ZWilliam J. Bowman
<p>This week, for ReAsOnS, I wanted to run a server on WSL 2 that was accessible from the internet. This was surprisingly involved and requires lots of hard-to-find tricks to forward ports through 4 different layers of network abstractions and firewalls.</p>
<ol>
<li>In WSL, make sure your server is using IPv4. I spent a hell of a long time just trying to figure out why I couldn’t access the server from localhost. I had successfully run a handful of local http servers from WSL that were accessible from the Windows host, so I wasn’t sure what the problem was. It turns out this server, written in Java, wouldn’t work until I added <code>-Djava.net.preferIPv4Stack=true</code> to the <code>java</code> options. It appears that Java was defaulting to IPv6, and WSL doesn’t forward IPv6 properly, or something.</li>
<li>In WSL, make sure you allow the port through your WSL firewall, if you’re using one. Using a WSL firewall might be redundant, but you might be using one. I usually use <code>ufw</code> on my Linux machines, so I’d run <code>ufw allow $PORT</code> in WSL.</li>
<li>In Windows, forward your port from the public IP port to the WSL port using <code>netsh interface portproxy add v4tov4 listenport=$PORT
listenaddress=0.0.0.0 connectport=$PORT connectaddress=127.0.0.1</code> in a Powershell with admin rights. This is one of the hard-to-find but necessary WSL-specific bits. It looks like Windows creates a virtual adapter that isn’t properly bridged with your internet network adapter. I tried playing various bridging tricks, but in the end, I had to manually create a <code>portproxy</code> rule using Windows’ network shell <code>netsh</code>. This listens on all addresses and forwards the connection to <code>localhost</code>, which seems to be automatically bridged with WSL. You can also try to manually forward it to the WSL adapter. Use <code>ipconfig</code> to find it. However, the WSL IP changes from time to time, so I recommend using <code>localhost</code> instead. It might also be wise to listen explicitly on your internet-facing IP instead of <code>0.0.0.0</code>, but this seemed to work.</li>
<li>In Windows, allow the port through the Windows firewall explicitly by adding a new <code>Inbound Rule</code> using the <code>Windows Defender Firewall with Advanced
Security</code> administrative tool. This is accessible as <code>WF.msc</code> in <code>cmd</code> and Powershell. Select <code>Inbound Rule</code>, and click <code>New rule...</code> in the action menu to the right, and work your way through the menu to allow the port explicitly. Normally, Windows asks if you want to allow applications through the firewall. This doesn’t seem to happen with WSL servers, so we have to manually add a rule.</li>
<li>In your router, set up port forwarding for the port.</li></ol>A Transparent Ad-Blocking VPN via SoftEther + Privoxyurn:https-www-williamjbowman-com:-blog-2015-12-22-a-transparent-ad-blocking-vpn-via-softether-privoxy2015-12-23T06:29:00Z2015-12-23T06:29:00ZWilliam J. Bowman
<p>I recently<sup><a href="#2015-12-22-a-transparent-ad-blocking-vpn-footnote-1-definition" name="2015-12-22-a-transparent-ad-blocking-vpn-footnote-1-return">1</a></sup>, finally, got a smart phone—an iPhone. One of the first things that annoyed me were the ads. I use <a href="https://adblockplus.org/">Ad-Block Plus</a> on all my computers and I have not been bothered by ads in quite some time.</p>
<p>One approach to removing ads is rooting my phone and installing a customized hosts file. This approach has several flaws. I once tried this approach on my android tablet. While better than nothing, it misses many ads and tends to interrupt normal internet use.</p>
<p>Another approach, as of iOS 9, is to use Safari content filters. However, this requires me to use Safari, and I prefer Firefox.</p>
<p>After lots of tinkering and reading and thinking, the best approach seems to be a VPN with a proxy that seamlessly blocks ads (and can potentially provide additional security, privacy, caching, etc.). There are apps that provide a VPN with an ad-blocking proxy, but reading their privacy policies caused me great concern. So I decided to set up my own.</p>
<!-- more-->
<h2 id="credits">Credits</h2>
<p>Some of this was inspired by <a href="http://lifehacker.com/5763170/how-to-secure-and-encrypt-your-web-browsing-on-public-networks-with-hamachi-and-privoxy">Lifehacker</a>. Unfortunately, their approach has several flaws. For one, they use Hamachi. I prefer to use my own server and free software. Their setup is not seamless; it requires both configuring the VPN client and configuring the client browser. I want to block anything going through port 80; mobile ads are sneaky, so I want to make certain mobile apps could not just ignore proxy settings.</p>
<h2 id="assumptions">Assumptions</h2>
<p>Before I explain my setup, I will explain some basic information about my machines and environment.</p>
<p>I run Arch Linux on all my machines, including my server. I use <code>systemd</code> as my init system. I use <code>ufw</code> for my firewall. I use <code>dhcp</code> (<a href="https://www.isc.org/software/dhcp">ISC DHCP</a>) for my DHCP server. I use <code>yaourt</code> as my interface to the Arch package repository and the AUR.</p>
<p>I assume you are comfortable with, and in fact prefer, a terminal. I will prefix terminal commands that must be run as root or with sudo by “sudo”, such as <code>sudo rm -rf /</code>, and I will not prefix terminal commands that should be run as an unprivileged user, such as <code>echo 120</code>.</p>
<p>I assume you have a static public IPv4 address, which I will call <code>$PUBLIC_IP</code>, and a second, static private IPv4 address, which I will call <code>$PRIVATE_IP</code>. When we set up the VPN, I will assume you use the subnet <code>10.10.1.1/24</code>. I assume the ethernet interface connected to the internet is <code>eth0</code>.</p>
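<p>To make the later commands paste-able, these placeholders can be exported as shell variables. The addresses below are illustrative only (the public one is drawn from a range reserved for documentation), not values from my setup:</p>

```shell
# Example placeholder values; substitute your own addresses.
# 203.0.113.10 is from TEST-NET-3, a range reserved for documentation.
export PUBLIC_IP=203.0.113.10
export PRIVATE_IP=192.168.1.10
echo "public: $PUBLIC_IP, private: $PRIVATE_IP"
# prints: public: 203.0.113.10, private: 192.168.1.10
```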
<p>I am using SoftEther version 4.19 Build 9599 and Privoxy version 3.0.23.</p>
<h2 id="introduction">Introduction</h2>
<p>The plan is this: the user configures the device to connect via VPN. After that, all traffic goes through the VPN, HTTP traffic passes through an ad-blocking (and potentially caching) proxy, and responses return to the device. I do not consider caching at this point, although this approach applies equally well with a caching proxy. Due to security concerns, I currently only block ads in HTTP content, although I will speculate about how to block ads in HTTPS content.</p>
<p>I will provide links to all the configuration files, although I will show excerpts of the files here to aid in explanation.</p>
<h2 id="softether-vpn">SoftEther VPN</h2>
<p>I decided to use <a href="https://softether.org/">SoftEther</a> because it can emulate many VPN implementations, is developed by a bunch of clever academics, and provides some unique features like VPN over DNS and VPN over ICMP. It has both command-line and GUI tools and is cross-platform. I do not discuss using VPN over DNS or over ICMP in this article.</p>
<p>SoftEther can be configured using either a local bridge or the SecureNAT feature. One of them is required to assign IPs to the VPN clients and to route traffic through the single static public IP. The SecureNAT feature is easy to use but slow, and it prevents us from routing traffic manually (such as through an ad-blocking proxy), so we will use a local bridge.</p>
<h3 id="installing-softether">Installing SoftEther</h3>
<p>Installing SoftEther VPN is simple on Arch Linux: <code>yaourt -S softethervpn-git</code>.</p>
<h3 id="configuring-softether">Configuring SoftEther</h3>
<p>SoftEther comes with an interactive configuration tool called <code>vpncmd</code>. I found this much easier to use than trying to edit the configuration file by hand. SoftEther also has a GUI configuration utility, but I never tried to use it. Unfortunately, the SoftEther documentation provides all the instructions using this GUI tool, so if you do not want to follow my steps quite precisely, you may be on your own. Most of my instructions are adapted from this <a href="http://blog.lincoln.hk/blog/2013/03/19/softether-on-vps/">helpful blog post</a> and directly from the <a href="https://www.softether.org/4-docs/1-manual/7._Installing_SoftEther_VPN_Server/7.4_Initial_Configurations">SoftEther documentation</a>.</p>
<p>After installing SoftEther, you need to start the server. The server must be started before it can be configured. You can (should) have your firewall active during this time, as you will only need to access the server locally via the configuration tool.</p>
<div class="brush: sh">
<pre><code>sudo systemctl start softethervpn-server.service</code></pre></div>
<p>Next, launch the configuration tool:</p>
<div class="brush: sh">
<pre><code>sudo vpncmd
vpncmd command - SoftEther VPN Command Line Management Utility
SoftEther VPN Command Line Management Utility (vpncmd command)
Version 4.19 Build 9599 (English)
Compiled 2015/10/19 20:09:05 by yagi at pc30
Copyright (c) SoftEther VPN Project. All Rights Reserved.</code></pre></div>
<p>Enter <code>1</code> to select the management of server menu. Next enter <code>localhost:5555</code> for the hostname of destination. Then leave the Virtual Hub Name empty and press enter. Once you create an administrator password, you will need to enter it at this point in the menu. By default, there is no administrator password. You should now be in the administrator menu for the VPN server.</p>
<pre><code>By using vpncmd program, the following can be achieved.
1. Management of VPN Server or VPN Bridge
2. Management of VPN Client
3. Use of VPN Tools (certificate creation and Network Traffic Speed
Test Tool)
Select 1, 2 or 3: 1
Specify the host name or IP address of the computer that the
destination VPN Server or VPN Bridge is operating on.
By specifying according to the format 'host name:port number', you can
also specify the port number.
(When the port number is unspecified, 443 is used.)
If nothing is input and the Enter key is pressed, the connection will
be made to the port number 8888 of localhost (this computer).
Hostname of IP Address of Destination: localhost:5555
If connecting to the server by Virtual Hub Admin Mode, please input
the Virtual Hub name.
If connecting by server admin mode, please press Enter without
inputting anything.
Specify Virtual Hub Name:
Connection has been established with VPN Server "localhost" (port
5555).
You have administrator privileges for the entire VPN Server.
VPN Server>help
....</code></pre>
<p>You will probably want to set an administrator password first. Enter <code>ServerPasswordSet</code> in the prompt:</p>
<pre><code>VPN Server> ServerPasswordSet
ServerPasswordSet command - Set VPN Server Administrator Password
Please enter the password. To cancel press the Ctrl+D key.
Password: ******
Confirm input: ******
The command completed successfully.
VPN Server></code></pre>
<p>Before we can create a user, we must select a hub, as users are local to hubs. For our purposes, we can use the default hub:</p>
<pre><code>VPN Server>Hub DEFAULT
Hub command - Select Virtual Hub to Manage
The Virtual Hub "DEFAULT" has been selected.
The command completed successfully.
VPN Server/DEFAULT></code></pre>
<p>Now we create a user:</p>
<pre><code>VPN Server/DEFAULT>UserCreate
UserCreate command - Create User
User Name: exampleusername
Assigned Group Name:
User Full Name: John Smith
User Description:
The command completed successfully.
VPN Server/DEFAULT>UserPasswordSet
UserPasswordSet command - Set Password Authentication for User Auth
Type and Set Password
User Name: exampleusername
Please enter the password. To cancel press the Ctrl+D key.
Password: **********
Confirm input: **********
The command completed successfully.
VPN Server/DEFAULT></code></pre>
<p>For privacy reasons, you may want to disable the packet log:</p>
<pre><code>VPN Server/DEFAULT>LogDisable
LogDisable command - Disable Security Log or Packet Log
Select Security or Packet: Packet
The command completed successfully.
VPN Server/DEFAULT></code></pre>
<p>Now we enable IPsec, which handles all the encryption of our VPN connection. I enable IPsec, and disable raw (unencrypted) L2TP. As far as I know, EtherIP / L2TPv3 are for site-to-site VPN, not for client-to-site, so I leave this disabled. You will need to create a shared secret key, and remember it for configuring the device later.</p>
<pre><code>VPN Server/DEFAULT>IPsecEnable
IPsecEnable command - Enable or Disable IPsec VPN Server Function
Enable L2TP over IPsec Server Function (yes / no): yes
Enable Raw L2TP Server Function (yes / no): no
Enable EtherIP / L2TPv3 over IPsec Server Function (yes / no): no
Pre Shared Key for IPsec (Recommended: 9 letters at maximum): a-secret
Default Virtual HUB in a case of omitting the HUB on the Username:
DEFAULT
The command completed successfully.
VPN Server/DEFAULT></code></pre>
<p>At this point, you should have a working VPN server. You could use the SecureNAT to test your connection now:</p>
<pre><code>VPN Server/DEFAULT>SecureNatEnable
SecureNatEnable command - Enable the Virtual NAT and DHCP Server
Function (SecureNat Function)
The command completed successfully.</code></pre>
<p>You may need to open several ports in your firewall. I installed the following <code>ufw</code> application profile, then ran <code>sudo ufw allow SoftEther</code>.</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-ufw-softetheretcufwapplicationsdsoftether"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-ufw-softether">/etc/ufw/applications.d/softether</a></h5>
<div class="brush: ini">
<pre><code># /etc/ufw/applications.d/softether
[SoftEther]
title=SoftEther VPN
description=SoftEther VPN
ports=500,1701,4500/udp|1701,1723/tcp</code></pre></div>
<p>You may also want to browse the configuration file and make changes. To do this, you should first stop the server:</p>
<div class="brush: sh">
<pre><code>sudo systemctl stop softethervpn-server.service</code></pre></div>
<p>As of this writing, the configuration file is located in <code>/usr/lib/softethervpn/vpnserver/vpn_server.config</code>.</p>
<h3 id="setting-up-a-local-bridge">Setting up a local bridge</h3>
<p>While the VPN is working, SecureNAT is slow, resource intensive, and prevents us from creating a transparent proxy. Now we must set up the local bridge. Some of these instructions are adapted from <a href="http://blog.lincoln.hk/blog/2013/05/17/softether-on-vps-using-local-bridge/">this blog post</a>, but that blog features only GUI configuration instructions for SoftEther.</p>
<p>You will need to start the VPN server again if you turned it off previously. Run <code>sudo vpncmd</code> and return to the administrator menu. Disable SecureNAT if you enabled it previously:</p>
<div class="brush: sh">
<pre><code>VPN Server/DEFAULT>SecureNatDisable
SecureNatDisable command - Disable the Virtual NAT and DHCP Server
Function (SecureNat Function)
The command completed successfully.
VPN Server/DEFAULT></code></pre></div>
<p>Now we create a bridge device. We will create a tap device rather than bridge with an existing device, as this seems to simplify the transparent proxy setup. I assume you call the bridge device <code>soft</code>, but this choice is arbitrary. The prefix <code>tap_</code> will be added to this name automatically. We use the command <code>BridgeCreate</code> which takes the hub <code>DEFAULT</code>, the named argument <code>/DEVICE</code> with the name of the device <code>soft</code>, and the named argument <code>/TAP</code> with value <code>yes</code>.</p>
<div class="brush: sh">
<pre><code>VPN Server/DEFAULT>BridgeCreate DEFAULT /DEVICE:soft /TAP:yes
BridgeCreate command - Create Local Bridge Connection
....
The command completed successfully.
VPN Server/DEFAULT>BridgeList
BridgeList command - Get List of Local Bridge Connection
Number|Virtual Hub Name|Network Adapter or Tap Device Name|Status
------+----------------+----------------------------------+---------
1 |DEFAULT |soft |Operating
The command completed successfully.
VPN Server/DEFAULT>exit</code></pre></div>
<p>Now we enable a DHCP server for the VPN subnet. I configured <code>/etc/dhcpd.conf</code> as follows. The important bit is for <code>subnet 10.10.1.0</code>.</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-dhcpd-confetcdhcpdconf"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-dhcpd-conf">/etc/dhcpd.conf</a></h5>
<div class="brush: nginx">
<pre><code># /etc/dhcpd.conf
# option definitions common to all supported networks...
option domain-name "xxx";
# DNS servers
option domain-name-servers 8.8.8.8, 8.8.4.4;
default-lease-time 600;
max-lease-time 7200;
# Use this to enable / disable dynamic dns updates globally.
ddns-update-style none;
# No service will be given on this subnet, but declaring it helps the
# DHCP server to understand the network topology.
subnet $PUBLIC_IP netmask 255.255.255.0 {
}
subnet $PRIVATE_IP netmask 255.255.128.0 {
}
subnet 10.10.1.0 netmask 255.255.255.0 {
option subnet-mask 255.255.255.0;
option routers 10.10.1.1;
range 10.10.1.47 10.10.1.57;
}</code></pre></div>
<p>Next we start the tap device and the DHCP server:</p>
<div class="brush: sh">
<pre><code>sudo systemctl start network@tap_soft
sudo systemctl start dhcpd4@tap_soft</code></pre></div>
<p>It would be wise to add these as dependencies to <code>softethervpn-server.service</code>. This can be done by installing the following override:</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-softethervpn-server-dhcpd-confetcsystemdsystemsoftethervpn-serverserviceddhcpdconf"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-softethervpn-server-dhcpd-conf">/etc/systemd/system/softethervpn-server.service.d/dhcpd.conf</a></h5>
<div class="brush: ini">
<pre><code># /etc/systemd/system/softethervpn-server.service.d/dhcpd.conf
[Unit]
Before=dhcpd4@tap_soft.service network@tap_soft.service
Requires=dhcpd4@tap_soft.service network@tap_soft.service</code></pre></div>
<p>Before we set up traffic forwarding for the VPN, we must ensure IPv4 forwarding is enabled in the kernel. Create the following override file, then run <code>sudo sysctl --system</code>.</p>
<h5 id="etcsysctldipv4forwardingconf"><code>/etc/sysctl.d/ipv4_forwarding.conf</code></h5>
<div class="brush: ini">
<pre><code># /etc/sysctl.d/ipv4_forwarding.conf
net.ipv4.ip_forward = 1</code></pre></div>
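<p>A quick way to confirm the setting took effect, without relying on the <code>sysctl</code> binary, is to read the value straight out of procfs:</p>

```shell
# 1 means the kernel will forward packets between interfaces; 0 means it won't.
cat /proc/sys/net/ipv4/ip_forward
```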
<p>Finally, we must forward traffic from the tap device to the internet device. You can issue the following commands with <code>iptables</code>, or configure <code>ufw</code> to add them on startup. I will provide both sets of instructions, but you should follow only one.</p>
<h4 id="via-iptables">Via <code>iptables</code></h4>
<p>First, accept all traffic coming from the VPN:</p>
<div class="brush: sh">
<pre><code>sudo iptables -A INPUT -s 10.10.1.1/24 -m state --state NEW -j ACCEPT
sudo iptables -A OUTPUT -s 10.10.1.1/24 -m state --state NEW -j ACCEPT
sudo iptables -A FORWARD -s 10.10.1.1/24 -m state --state NEW -j ACCEPT</code></pre></div>
<p>Also accept all traffic from established connections:</p>
<div class="brush: sh">
<pre><code>sudo iptables -A INPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A OUTPUT -m state --state RELATED,ESTABLISHED -j ACCEPT
sudo iptables -A FORWARD -m state --state ESTABLISHED,RELATED -j ACCEPT</code></pre></div>
<p>Finally, forward all traffic from the tap device to the internet interface. If you use a static IP address on the server, use this command:</p>
<div class="brush: sh">
<pre><code>sudo iptables -t nat -A POSTROUTING -s 10.10.1.1/24 -j SNAT --to-source $PUBLIC_IP</code></pre></div>
<p>If your public IP address is not static, use this command:</p>
<div class="brush: sh">
<pre><code>sudo iptables -t nat -A POSTROUTING -s 10.10.1.1/24 -o eth0 -j MASQUERADE</code></pre></div>
<h4 id="via-ufw">Via <code>ufw</code></h4>
<p>If you use <code>ufw</code>, you can add the VPN forwarding rules to <code>/etc/ufw/before.rules</code>. These rules are in <code>iptables-save</code> format. Similar methods may work for other higher-level firewall utilities. Below is an excerpt of my <code>before.rules</code> file. The rules I have added are enclosed in <code>#<<< #>>></code> comment tags, at lines 10–21 and lines 31–38.</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-before-rulesetcufwbeforerules"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-before-rules">/etc/ufw/before.rules</a></h5>
<div class="brush: sh">
<pre><code># /etc/ufw/before.rules
#
# Rules that should be run before the ufw command line added rules. Custom
# rules should be added to one of these chains:
# ufw-before-input
# ufw-before-output
# ufw-before-forward
#
#<<< Start of NAT table rules for VPN
*nat
:POSTROUTING ACCEPT [0:0]
# Forward all VPN traffic through internet device.
# Use this in a dynamic IP setting
# -A POSTROUTING -s 10.10.1.1/24 -o eth0 -j MASQUERADE
# Use this in a static IP setting
-A POSTROUTING -o eth0 -s 10.10.1.1/24 -j SNAT --to-source $PUBLIC_IP
# tell ufw to process the lines
COMMIT
#>>> End of VPN rules
# Don't delete these required lines, otherwise there will be errors
*filter
:ufw-before-input - [0:0]
:ufw-before-output - [0:0]
:ufw-before-forward - [0:0]
:ufw-not-local - [0:0]
# End required lines
#<<< Start of accept rules for VPN
-A ufw-before-input -s 10.10.1.1/24 -m state --state NEW -j ACCEPT
-A ufw-before-output -s 10.10.1.1/24 -m state --state NEW -j ACCEPT
-A ufw-before-forward -s 10.10.1.1/24 -m state --state NEW -j ACCEPT
# ufw already accepts input and output established connections; also
# accept forward
-A ufw-before-forward -m state --state ESTABLISHED,RELATED -j ACCEPT
#>>> End of VPN Rules
....
COMMIT</code></pre></div>
<p>This concludes the VPN setup. Your VPN should now be working with a local bridge. You may need to restart the VPN or the DHCP server for firewall settings to take effect.</p>
<h2 id="privoxy">Privoxy</h2>
<p>I decided to use <a href="http://www.privoxy.org/">Privoxy</a> as my ad-blocking proxy. It is lightweight and easy to use, provides advanced filtering abilities, enables compressing outgoing content, and provides transparent HTTP proxying. There even exist tools such as <a href="https://projects.zubr.me/wiki/adblock2privoxy">adblock2privoxy</a><sup><a href="#2015-12-22-a-transparent-ad-blocking-vpn-footnote-2-definition" name="2015-12-22-a-transparent-ad-blocking-vpn-footnote-2-return">2</a></sup> for converting Ad-Block Plus blocklists to Privoxy filter files. Privoxy does not support caching or transparent HTTPS proxying. As transparent HTTPS proxying introduces security concerns, I will consider this lack of support a feature for now and speculate about how to gain transparent HTTPS proxying later.</p>
<h3 id="installing-privoxy">Installing Privoxy</h3>
<p>Installing Privoxy is simple on Arch Linux: <code>yaourt -S privoxy</code>.</p>
<h3 id="configuring-privoxy">Configuring Privoxy</h3>
<p>Privoxy is easy to configure via the configuration file. To get started, set <code>listen-address</code> to your private IP and a port, then enable some <code>actionsfile</code>s and <code>filterfile</code>s. I assume you use port <code>8118</code>.</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-configetcprivoxyconfig"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-config">/etc/privoxy/config</a></h5>
<div class="brush: nginx">
<pre><code># /etc/privoxy/config
...
listen-address $PRIVATE_IP:8118
actionsfile match-all.action
actionsfile default.action
actionsfile user.action
# actionsfile ab2p.system.action
# actionsfile ab2p.action
filterfile default.filter
filterfile user.filter
# filterfile ab2p.system.filter
# filterfile ab2p.filter
...</code></pre></div>
<p>There are some other useful options, such as <code>compression-level</code> and <code>enable-remote-toggle</code>. I add compression to save data on mobile devices, and enable remote toggle in case I find a website is broken by this setup. So far, I have not found any.</p>
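<p>For reference, here is roughly what those two options look like in the same <code>config</code> file. The compression level shown is an example value (Privoxy accepts 0–9), and compression is only available if your Privoxy build was compiled with compression support:</p>

```nginx
# /etc/privoxy/config (excerpt; values are examples, not recommendations)
# gzip-compress filtered pages for clients that accept it (1 = fastest)
compression-level 1
# allow toggling Privoxy on and off via its web interface
enable-remote-toggle 1
```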
<p>After Privoxy is configured, enable and start it:</p>
<div class="brush: sh">
<pre><code>sudo systemctl enable privoxy
sudo systemctl start privoxy</code></pre></div>
<p>At this point, Privoxy should be available as a proxy on your VPN. You can manually set up HTTP and HTTPS proxies while connected to your VPN to test it out.</p>
<h2 id="transparent-proxying">Transparent Proxying</h2>
<p>To block ads on all HTTP connections, including sneaky mobile ads, and to provide a better user experience, we will set up transparent proxying. All HTTP requests coming from the VPN will automagically be proxied through Privoxy.</p>
<p>First, we need one more <code>iptables</code> rule. This rule forwards all traffic from the VPN with destination port 80 to Privoxy, using destination NAT (DNAT) to handle multiplexing.</p>
<div class="brush: sh">
<pre><code>iptables -t nat -A PREROUTING -s 10.10.1.1/24 -p tcp -m multiport --dport 80 -j DNAT --to-destination $PRIVATE_IP:8118</code></pre></div>
<p>If you use <code>ufw</code> to manage your firewall, use the following diff for the <code>before.rules</code> file.</p>
<h5 id="etcufwbeforerules"><code>/etc/ufw/before.rules</code></h5>
<div class="brush: diff">
<pre><code>#<<< Start of NAT table rules for VPN
*nat
:POSTROUTING ACCEPT [0:0]
+ -A PREROUTING -s 10.10.1.1/24 -p tcp -m multiport --dport 80 -j DNAT --to-destination $PRIVATE_IP:8118
# Forward all VPN traffic through internet device.
# Use this in a dynamic IP setting
#</code></pre></div>
<p>This forwarding rule is fragile; it would be better to use <code>TPROXY</code>. However, Privoxy does not support <code>TPROXY</code>, so this will have to do. For more information, see <a href="https://superuser.com/questions/982053/squid-3-3-transparent-ipv4-and-ipv6-proxy-with-tproxy">this thread</a>, and <a href="https://www.kernel.org/doc/Documentation/networking/tproxy.txt">the documentation for TPROXY</a>.</p>
<p>Finally, we need to enable intercept proxying in Privoxy:</p>
<h5 id="etcprivoxyconfig"><code>/etc/privoxy/config</code></h5>
<div class="brush: nginx">
<pre><code>...
accept-intercepted-requests 1</code></pre></div>
<p>Now that all HTTP traffic is automagically forwarded to Privoxy, it would be wise to add Privoxy as a dependency to SoftEther.</p>
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-softethervpn-server-privoxy-confetcsystemdsystemsoftethervpn-serverservicedprivoxyconf"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-softethervpn-server-privoxy-conf">/etc/systemd/system/softethervpn-server.service.d/privoxy.conf</a></h5>
<div class="brush: ini">
<pre><code>[Unit]
Requires=privoxy.service</code></pre></div>
<h2 id="device-setup">Device Setup</h2>
<p>Now that the VPN is up and running, you can set up your device in the normal way. All HTTP traffic will be scrubbed of ads. However, you must manually connect the VPN when you want to use it. If you forget, you may end up seeing ads. If you also rely on this VPN for security, this is also a security risk. Worse still, iOS seems to disconnect the VPN from time to time when the phone has been idle for a while, and does not automagically reconnect. Instead, we would like the phone to automagically connect to the VPN before it tries to open any other network connections.</p>
<p>iOS features “On-Demand VPN” which solves these problems. When any network connection is initiated, the system will ensure the VPN is on, establishing a new connection if necessary. More advanced configuration is possible that will enable the VPN only for certain requests or on certain access points, if that is the desired behavior. However, I assume the VPN is always desired.</p>
<p>On-Demand VPN can only be configured by writing a <a href="https://developer.apple.com/library/ios/featuredarticles/iPhoneConfigurationProfileRef/Introduction/Introduction.html#//apple_ref/doc/uid/TP40010206-CH1-SW27">VPN payload</a> for an <a href="https://developer.apple.com/library/ios/featuredarticles/iPhoneConfigurationProfileRef/Introduction/Introduction.html">iOS configuration profile</a>. The documentation for writing these profiles is incomplete and sometimes wrong. The file I discuss below works as of iOS 9.2 and OS X 10.9.3.</p>
<p>A <code>.mobileconfig</code> file is an XML file with a MIME type <code>application/x-apple-aspen-config</code>. Each <code>.mobileconfig</code> is rooted at the <code><plist></code> tag, which contains a dictionary. This dictionary must define a <code>PayloadType</code> key whose value is exactly the string <code>Configuration</code>, and a <code>PayloadVersion</code> whose value is exactly the integer <code>1</code>. The dictionary must also define the keys <code>PayloadIdentifier</code> and <code>PayloadUUID</code>. Identifiers must be reverse DNS-style identifiers, and UUIDs can be arbitrary as long as they are globally unique. OS X has a tool <code>uuidgen</code> for generating UUIDs.</p>
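<p><code>uuidgen</code> ships with OS X (and with <code>util-linux</code> on most Linux distributions); on any Linux machine, the kernel can also hand out a random UUID directly:</p>

```shell
# Prints one random UUID per read; run once per PayloadUUID field you need.
cat /proc/sys/kernel/random/uuid
```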
<h5 id="httpsgistgithubcomwilbowmace7516a3219cd7d9a5bffile-vpn-mobileconfigvpnmobileconfig"><a href="https://gist.github.com/wilbowma/ce7516a3219cd7d9a5bf#file-vpn-mobileconfig">vpn.mobileconfig</a></h5>
<div class="brush: xml">
<pre><code><?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE plist PUBLIC "-//Apple//DTD PLIST 1.0//EN" "http://www.apple.com/DTDs/PropertyList-1.0.dtd">
<plist version="1.0">
<dict>
<key>PayloadIdentifier</key>
<string>com.example.config</string>
<key>PayloadUUID</key>
<string>....</string>
<key>PayloadType</key>
<string>Configuration</string>
<key>PayloadVersion</key>
<integer>1</integer>
....</code></pre></div>
<p>The important entry for the top-level dictionary is <code>PayloadContent</code>, which contains an array of dictionaries. Each of these dictionaries installs some payload on the device. We will create a payload for our VPN.</p>
<p>To declare that this payload configures a VPN, use the <code>PayloadType</code> <code>com.apple.vpn.managed</code>. You must again give a <code>PayloadVersion</code>, <code>PayloadIdentifier</code>, and <code>PayloadUUID</code>. In this payload, the version need not be <code>1</code>.</p>
<div class="brush: xml">
<pre><code>....
<key>PayloadContent</key>
<array>
<dict>
<key>PayloadType</key>
<string>com.apple.vpn.managed</string>
<key>PayloadVersion</key>
<integer>1</integer>
<key>PayloadIdentifier</key>
<string>com.example.config.vpn</string>
<key>PayloadUUID</key>
<string>....</string>
....</code></pre></div>
<p>We first declare that all traffic should be routed through the VPN. By default, iOS and OS X try to avoid using the VPN unless a connection fails. This default behavior prevents our goal of blocking ads on all traffic.</p>
<p>According to the documentation, we use the <code>OverridePrimary</code> key, which takes a boolean value, to force all traffic through the VPN. However, the documentation appears to be wrong on this point: using this key does not change the default behavior. Instead, we appear to need the <code>IPv4</code> key, which takes a dictionary value. In this dictionary, we give the <code>OverridePrimary</code> key the integer value <code>1</code>, representing true.</p>
<p>We also declare the <code>ProviderType</code> to be <code>packet-tunnel</code>. This causes all traffic to tunnel through the VPN at the IP layer rather than the application layer.</p>
<div class="brush: xml">
<pre><code>....
<key>OverridePrimary</key>
<true/>
<key>IPv4</key>
<dict>
<key>OverridePrimary</key>
<integer>1</integer>
</dict>
<key>ProviderType</key>
<string>packet-tunnel</string>
....</code></pre></div>
<p>Next we declare the type of VPN and configure the authentication details. We are using an L2TP VPN. This VPN requires setting keys in two different dictionaries to configure authentication. In the <code>PPP</code> dictionary, we define the username and password, the address of the VPN, and disable <code>TokenCard</code> (an advanced authentication mechanism that we are not using).</p>
<div class="brush: xml">
<pre><code>....
<key>VPNType</key>
<string>L2TP</string>
<key>PPP</key>
<dict>
<key>AuthName</key>
<string>exampleusername</string>
<key>TokenCard</key>
<false/>
<key>AuthPassword</key>
<string>password</string>
<key>CommRemoteAddress</key>
<string>$PUBLIC_IP</string>
</dict>
....</code></pre></div>
<p>In the <code>IPSec</code> dictionary, we configure the shared secret. The shared secret must be base64-encoded. The documentation also tells us that we must set <code>LocalIdentifierType</code> to exactly the string <code>KeyID</code>.</p>
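<p>For example, the <code>SharedSecret</code> data in the excerpt below is just the example pre-shared key <code>a-secret</code> from the <code>IPsecEnable</code> step, run through <code>base64</code>:</p>

```shell
# Base64-encode the IPsec pre-shared key; printf (rather than echo)
# avoids accidentally encoding a trailing newline.
printf 'a-secret' | base64
# YS1zZWNyZXQ=
```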
<div class="brush: xml">
<pre><code>
<key>IPSec</key>
<dict>
<key>AuthenticationMethod</key>
<string>SharedSecret</string>
<key>LocalIdentifierType</key>
<string>KeyID</string>
<key>SharedSecret</key>
<data>YS1zZWNyZXQ=</data>
....</code></pre></div>
<p>The <code>IPSec</code> dictionary also contains the VPN On-Demand keys. To enable VPN On-Demand, we set the key <code>OnDemandEnabled</code> to the integer <code>1</code>, representing true. Then we configure an array of rules for turning the VPN on or off. These rules are applied in the same order in which they appear in this array, any time a network change is detected: for instance, if the network switches from Wi-Fi to cellular, or if the interface is reinitialized after sleeping. For a complete list, see the <a href="https://developer.apple.com/library/ios/featuredarticles/iPhoneConfigurationProfileRef/Introduction/Introduction.html">iOS configuration profile</a> section “On Demand Rules Dictionary Keys”. A separate set of rules can be used to toggle the VPN for specific network <em>connections</em>, such as a browser connection to a specific host or domain.</p>
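<p>As a hypothetical illustration of such per-network rules (not part of my setup): the <code>SSIDMatch</code> key documented in the same section could skip the VPN on a trusted network, falling through to an unconditional connect everywhere else. <code>HomeWiFi</code> is a placeholder SSID:</p>

```xml
<!-- Hypothetical: rules are evaluated in order, and the first match wins. -->
<key>OnDemandRules</key>
<array>
  <dict>
    <key>Action</key>
    <string>Disconnect</string>
    <key>SSIDMatch</key>
    <array><string>HomeWiFi</string></array>
  </dict>
  <dict>
    <key>Action</key>
    <string>Connect</string>
  </dict>
</array>
```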
<p>For our purposes, we only need 1 rule: unconditionally connect. This will ensure anytime any network card is initialized, we connect to the VPN before any connections are established.</p>
<div class="brush: xml">
<pre><code> ....
<key>OnDemandEnabled</key>
<integer>1</integer>
<key>OnDemandRules</key>
<array>
<dict>
<key>Action</key>
<string>Connect</string>
</dict>
</array>
</dict>
</array>
</dict>
</plist></code></pre></div>
<p>This file can now be sent via email and then simply clicked to install the VPN. Ensure your mail client has the correct MIME type for the attachment or iOS will not consider the file a configuration profile; I had to add a line to <code>~/.mime.types</code>.</p>
<h5 id="mimetypes"><code>~/.mime.types</code></h5>
<div class="brush: nginx">
<pre><code>application/x-apple-aspen-config mobileconfig</code></pre></div>
<h3 id="digression-https-proxy">Digression: HTTPS Proxy</h3>
<p>While we are writing a configuration profile, we can install a HTTPS proxy to be used while connected to the VPN. The VPN is set up to transparently proxy HTTP requests, but we cannot (safely, or with Privoxy) transparently proxy HTTPS requests. Setting up an HTTPS proxy manually will instruct browsers to make HTTPS connection requests through the proxy. This will not provide ad-blocking via filtering (inside the pages), and it is conceivable that malicious apps could ignore the proxy settings. However, this may provide ad-blocking by blocking ad-serving domains/IPs requested through the proxy, and may enable Privoxy to compress HTTPS pages. This can be installed by adding a <code>Proxies</code> dictionary at the same level as the <code>IPSec</code> dictionary in the VPN payload:</p>
<div class="brush: xml">
<pre><code> <key>Proxies</key>
<dict>
<key>HTTPSEnable</key>
<integer>1</integer>
<key>HTTPSProxy</key>
<string>$PRIVATE_IP</string>
<key>HTTPSPort</key>
<integer>8118</integer>
</dict></code></pre></div>
<h2 id="conclusion">Conclusion</h2>
<p>Now, all traffic should be tunneled to your VPN and ads should be blocked. You can use <a href="http://simple-adblock.com/faq/testing-your-adblocker/">this page</a> to test the ad blocking abilities. If you spend some time configuring <a href="https://projects.zubr.me/wiki/adblock2privoxy">adblock2privoxy</a>, you can convert <a href="https://adblockplus.org/">Ad-Block Plus</a> filters to Privoxy format, and get element hiding via Privoxy.</p>
<p>After this setup, you can even block ads for all your friends by handing out a <code>.mobileconfig</code> file, perhaps after first creating new username/passwords for each. In fact, if anyone is interested, I would be willing to host an ad-blocking VPN service that does not log or sell your traffic information, for a nominal monthly fee (there are hosting costs).</p>
<h2 id="addition-resoures">Additional Resources</h2>
<p>I have totally ignored IPv6 in this article. Most of the configuration for IPv6 can be guessed from the IPv4 configuration I have described, but <a href="http://az.cokh.net/softether-vpn-server-on-a-nat-server/">this blog</a> may also serve as a useful resource.</p>
<p>Privoxy is a great tool, but is dedicated to doing one thing well. It is a great anonymizing and filtering proxy, but does not support features such as caching or transparent HTTPS. An alternative is to use <a href="http://www.squid-cache.org/">Squid</a>, an advanced caching proxy. It supports the previously mentioned <code>TPROXY</code> kernel feature (which is more robust than our <code>iptables</code> rules), can be configured for <a href="http://blog.davidvassallo.me/2011/03/22/squid-transparent-ssl-interception/">transparent HTTPS proxying</a>, and supports advanced filtering such as <a href="http://www.initechsolutions.org/articles/compressing_proxy">compressing images in web pages</a>. I found several guides for setting up Squid in this way; <a href="http://thejimmahknows.com/network-adblocking-using-squid-squidguard-and-iptables">this one</a> seemed quite complete.</p>
<hr />
<p><sup><a href="#2015-12-22-a-transparent-ad-blocking-vpn-footnote-1-definition" name="2015-12-22-a-transparent-ad-blocking-vpn-footnote-1-return">1</a></sup> “Recently” at the time of writing: I started this post over a year ago, and only finally got things working the way I wanted.</p>
<p><sup><a href="#2015-12-22-a-transparent-ad-blocking-vpn-footnote-2-definition" name="2015-12-22-a-transparent-ad-blocking-vpn-footnote-2-return">2</a></sup> PL Bonus Point: adblock2privoxy is written in Haskell.</p>
<div class="footnotes">
<ol></ol></div>Setting up WebDAV, CalDAV, and CardDAV serversurn:https-www-williamjbowman-com:-blog-2015-07-24-setting-up-webdav-caldav-and-carddav-servers2015-07-24T23:46:24Z2015-07-24T23:46:24ZWilliam J. Bowman
<p>A while back I wrote a post about <a href="/blog/2014/04/06/to-be-or-not-to-be-paranoid/">paranoia</a> in which I was considering allowing Google or Apple to manage things like my calendar and contacts. Since then, I have reequipped my paranoia hat. This week I set up my own WebDAV, CalDAV, and CardDAV servers and secured them behind an <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html">nginx proxy</a> which provides SSL encryption and HTTP authentication.</p>
<!-- more-->
<h2 id="notes">Notes:</h2>
<p>This blog post was written with Radicale 2. I’ve since been UNABLE to update it to Radicale 3, as I cannot get the new rights management system working.</p>
<p>Radicale 2 contains a bug causing it to ignore <code>umask</code>, so you probably want a cronjob that fixes the permissions with <code>chown</code> and <code>chmod</code> on all your calendar files instead.</p>
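<p>As a sketch of what that cronjob could run (the collection path and the <code>radicale</code> user below are assumptions; the demo operates on a throwaway temp directory so it is runnable anywhere):</p>

```shell
# Permissions fix-up for Radicale collections. A real install would set
# COLLECTIONS=/srv/radicale/collections and also run, as root:
#   chown -R radicale:radicale "$COLLECTIONS"
# (both path and user name are assumptions). Here we demo on a temp directory.
COLLECTIONS=$(mktemp -d)                 # stand-in for the collections folder
touch "$COLLECTIONS/calendar.ics"
chmod 644 "$COLLECTIONS/calendar.ics"    # simulate a file created with the wrong umask

# Directories need execute permission for the owner; files do not.
find "$COLLECTIONS" -type d -exec chmod 700 {} +
find "$COLLECTIONS" -type f -exec chmod 600 {} +
```

<p>A crontab entry such as <code>*/30 * * * * /usr/local/bin/fix-radicale-perms</code> (the script path is hypothetical) would apply this every half hour.</p>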
<h2 id="comparing-servers">Comparing servers</h2>
<p>The most important feature I want in a server is the ability to actually serve the clients I use (shocking, I am sure). For CalDAV, my primary client is my iPhone. I access my reminders and calendar through my iPhone, so I need a CalDAV server that works with the iPhone. For CardDAV, my primary client is mutt, which I use to send most of my email. I need something I can integrate with mutt, but that will also work on my iPhone. For WebDAV, my primary client is <a href="https://itunes.apple.com/us/app/notability/id360593530?mt=8">Notability</a>, which uses WebDAV to back up my notes. I do not really use WebDAV for much else, and I have a separate setup I use for syncing assorted files, but maybe WebDAV will soon replace it.</p>
<p>The secondary features that I want in a server are simplicity and low resource use. I want something that does one thing well, because that is just how I am. My VPS has few resources—1 core, 1 GB of RAM, and 24 GB of hard drive space—so I want something that uses little additional resources.</p>
<h3 id="webdav-">WebDAV: <a href="http://nginx.org/">nginx</a></h3>
<p>I settled on <a href="http://nginx.org/">nginx</a> extended with a <a href="https://github.com/arut/nginx-dav-ext-module/">secondary module</a> for my WebDAV server.</p>
<p>I already use <a href="http://nginx.org/">nginx</a> as my web server, so it has already passed my secondary considerations—<a href="http://nginx.org/">nginx</a> is relatively simple and makes efficient use of resources.</p>
<p>Unfortunately, <a href="http://nginx.org/">nginx</a>’s <a href="http://nginx.org/en/docs/http/ngx_http_dav_module.html">default WebDAV module</a> does not pass my first requirement—that it must work with my primary client. The built-in support for WebDAV in <a href="http://nginx.org/">nginx</a> is limited, perhaps because it already subscribes to the “do one thing well” philosophy. Thankfully, <a href="http://nginx.org/">nginx</a> is modular and <a href="https://github.com/arut">someone</a> has written a <a href="https://github.com/arut/nginx-dav-ext-module/">module</a> that provides the necessary extensions.</p>
<h3 id="cardcaldav-">{Card,Cal}DAV: <a href="http://radicale.org/">radicale</a></h3>
<p>I settled on <a href="http://radicale.org/">radicale</a> for the CalDAV and CardDAV servers.</p>
<p><a href="http://radicale.org/">Radicale</a> provides both a CalDAV and CardDAV server. The <a href="http://radicale.org/">radicale</a> project documents many of the clients it supports, and the list includes the iPhone. After a little experimenting, I found it also supports the <a href="http://lostpackets.de/pycarddav/pages/usage.html">pycarddav</a> client, a command line CardDAV client which can provide mutt with CardDAV support.</p>
<p><a href="http://radicale.org/">Radicale</a> is written in Python, which is already installed on my server. The entire server only takes up an extra 0.10 MB of disk space. The <a href="http://radicale.org/">radicale</a> project explains that it believes in the “do one thing well” philosophy, and the server is pretty simple to use and configure. It does not require complicated database back ends. Although it does provide features like SSL and authentication support—which are unnecessary insofar as they are better provided by <a href="http://nginx.org/">nginx</a> acting as a proxy—it does so through existing Python modules and not new code.</p>
<h3 id="alternatives-considered">Alternatives considered</h3>
<p>I did not consider very many other WebDAV servers, since I already have <a href="http://nginx.org/">nginx</a> installed and respect the project a great deal. However, I did consider many other {Card,Cal}DAV servers. I will explain a little about why I did not like them.</p>
<h4 id=""><a href="http://www.davical.org/">DAViCal</a></h4>
<p><a href="http://www.davical.org/">DAViCal</a> seems to better support many CalDAV clients. It is under more active development and has lots of documentation compared to <a href="http://radicale.org/">radicale</a>. Unlike <a href="http://radicale.org/">radicale</a>, the project is much more concerned with faithfully implementing CalDAV, and supporting lots of fancy features. <a href="http://radicale.org/">Radicale</a> is much more concerned with simplicity and supporting clients as they act in practice, and less concerned with the CalDAV protocol and advanced features.</p>
<p>However, <a href="http://www.davical.org/">DAViCal</a> requires PHP and PostgreSQL. I am opposed to PHP as a language, so that is one strike against it. I also do not have PHP or PostgreSQL installed, so DAViCal would increase the disk usage of my server by a lot.</p>
<h4 id=""><a href="http://baikal-server.com/">Baïkal</a></h4>
<p><a href="http://baikal-server.com/">Baïkal</a> is a very lightweight (2MB codebase) {Cal,Card}DAV server with slick web-based configuration. It seems to be under more active development compared to <a href="http://radicale.org/">radicale</a>. It supports all the clients I care about.</p>
<p>However, it requires PHP and MySQL, so I had to reject it for similar reasons to DAViCal.</p>
<h4 id=""><a href="http://sabre.io/">SabreDAV</a></h4>
<p><a href="http://sabre.io/">SabreDAV</a> is a single server that provides WebDAV, CalDAV, and CardDAV—among other—protocols. It seems to provide much better support for the protocols than other servers. It has a plugin architecture with plugins for more advanced features. It even has a web-based administration page, although it is less slick than <a href="http://baikal-server.com/">Baïkal</a>’s. This all makes it a great choice except…</p>
<p>It requires PHP. It does not even require a database, but I am still not willing to budge on this PHP thing.</p>
<h4 id=""><a href="https://owncloud.org/">ownCloud</a></h4>
<p><a href="https://owncloud.org/">ownCloud</a> is a very cool project. It seems to aim to give the average computer user the ability to set up their own “cloud”, complete with WebDAV, CalDAV, CardDAV, online videos, online PDF viewing, music sharing, and about 100 other features. It has very slick web interfaces and services.</p>
<p>It seems to support all the right clients, but it is an incredibly complex (large) project. It has tons of features I do not need or want. Therefore, I never looked into actually installing it. I am sure it requires at least a database.</p>
<h2 id="webdav-">WebDAV: <a href="http://nginx.org/">nginx</a></h2>
<h3 id="setting-up-the-server">Setting up the server</h3>
<p>You need <a href="http://nginx.org/">nginx</a> with two modules. The <a href="http://nginx.org/en/docs/http/ngx_http_dav_module.html">first module</a> is included in the <a href="http://nginx.org/">nginx</a> codebase. You can build support for it by compiling <a href="http://nginx.org/">nginx</a> with <code>--with-http_dav_module</code>. The second module, <a href="https://github.com/arut/nginx-dav-ext-module/">http_dav_ext_methods</a>, adds support for two important request methods that clients seem to require. You have to build this module from source separately, and compile <a href="http://nginx.org/">nginx</a> with <code>--add-module=<path-to-module></code>.</p>
<p>After compiling <a href="http://nginx.org/">nginx</a>, you can easily enable WebDAV in <code>nginx.conf</code> using the following snippet. This snippet also enables SSL support and HTTP basic authentication. All WebDAV files are stored under <code>/www/webdav/data</code>. The <code>autoindex on</code> directive enables viewing all files in the <code>/www/webdav/data</code> directory, even from a web browser. This snippet serves the WebDAV server only on the <code>webdav.williamjbowman.com</code> subdomain. This can be quite handy for remembering the server address, although it can cause problems if your server uses <a href="https://en.wikipedia.org/wiki/HTTP_Strict_Transport_Security">HSTS</a> and your SSL certificate does not include the subdomain.</p>
<!-- TODO: Gist snippets-->
<h5 id="nginxconf"><code>nginx.conf</code></h5>
<div class="brush: nginx">
<pre><code>....
# WebDAV
server {
listen 443 ssl spdy;
server_name webdav.williamjbowman.com;
root /www/webdav;
auth_basic "Not currently available";
auth_basic_user_file /etc/nginx/htpasswd;
location / {
client_body_temp_path /webdav/tmp;
dav_methods PUT DELETE MKCOL COPY MOVE;
dav_ext_methods PROPFIND OPTIONS;
create_full_put_path on;
dav_access user:rw group:r;
autoindex on;
}
}</code></pre></div>
<p>You can also serve through a subdirectory instead of a subdomain:</p>
<!-- TODO: Gist snippets-->
<h5 id="nginxconf"><code>nginx.conf</code></h5>
<div class="brush: nginx">
<pre><code>....
# WebDAV
server {
listen 443 ssl spdy;
server_name williamjbowman.com www.williamjbowman.com;
....
location /webdav {
auth_basic "Not currently available";
auth_basic_user_file /etc/nginx/htpasswd;
client_body_temp_path /webdav/tmp;
dav_methods PUT DELETE MKCOL COPY MOVE;
dav_ext_methods PROPFIND OPTIONS;
create_full_put_path on;
dav_access user:rw group:r;
autoindex on;
}
}</code></pre></div>
<h3 id="setting-up-clients">Setting up clients</h3>
<p>You can easily view the files in a browser by simply going to, e.g., <code>https://webdav.williamjbowman.com/</code>. Of course, it requires authentication if you follow my snippet.</p>
<p>I also access WebDAV through <a href="http://docs.xfce.org/xfce/thunar/start">thunar</a>, my file manager, with the help of <a href="https://savannah.nongnu.org/projects/davfs2">davfs2</a>, which provides a FUSE filesystem for WebDAV. The only trick to this is that <a href="http://docs.xfce.org/xfce/thunar/start">thunar</a> requires navigating to the completely intuitive URI <code>davs://<username>@webdav.williamjbowman.com/</code>.</p>
<p><a href="https://itunes.apple.com/us/app/notability/id360593530?mt=8">Notability</a> just requires giving the url, <code>https://webdav.williamjbowman.com</code>, the username, and password.</p>
<p>If you serve the WebDAV through a subdirectory rather than a subdomain, that works fine too. In this case, if using the example snippet above, the relevant addresses would be <code>https://williamjbowman.com/webdav</code> or <code>davs://<username>@williamjbowman.com/webdav</code>.</p>
<h2 id="caldav-and-carddav--via-">CalDAV and CardDAV: <a href="http://radicale.org/">radicale</a> via <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html">nginx proxy</a></h2>
<h3 id="setting-up-the-server">Setting up the server</h3>
<p>Installing <a href="http://radicale.org/">radicale</a> is quite simple. You can do so through your favorite package manager, e.g., <code>yaourt -S radicale</code>, or through Python’s package manager, or by unzipping the package.</p>
<p>The configuration file for my <code>radicale</code> server is stored in <code>/etc/radicale/config</code> and all the files for the server live in <code>/srv/radicale/</code>.</p>
<p>My <a href="http://radicale.org/">radicale</a> server is configured with no rights management, no SSL, and no authentication, but it only listens on <code>localhost</code>. The server is publicly accessible with SSL and authentication through an <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html">nginx proxy</a>. The relevant configuration snippets are below.</p>
<h5 id="etcradicaleconfig"><code>/etc/radicale/config</code></h5>
<div class="brush: cfg">
<pre><code>[server]
hosts = 127.0.0.1:5232
pid = /run/radicale.pid
ssl = False
# This needs to change if served from a subdirectory instead of a
# subdomain
base_prefix = /
[encoding]
request = utf-8
stock = utf-8
[auth]
type = None
[rights]
type = None
[storage]
type = filesystem
filesystem_folder = /srv/radicale/collections</code></pre></div>
<p>The <a href="http://nginx.org/en/docs/http/ngx_http_proxy_module.html">nginx proxy</a> is simple to setup:</p>
<h5 id="nginxconf"><code>nginx.conf</code></h5>
<div class="brush: nginx">
<pre><code>....
# CalDAV and CardDAV
server {
listen 443 ssl spdy;
server_name caldav.williamjbowman.com carddav.williamjbowman.com;
auth_basic "Not currently available";
auth_basic_user_file /etc/nginx/caldav/htpasswd;
location / {
proxy_pass http://127.0.0.1:5232;
proxy_buffering on;
}
}</code></pre></div>
<h3 id="setting-up-clients">Setting up clients</h3>
<h4 id="iphone">iPhone</h4>
<p><a href="http://radicale.org/">Radicale</a> provides <a href="http://radicale.org/user_documentation/#idiphone-ipad">instructions for setting up the iPhone</a>, but I found that using a subdomain and a proxy simplified the procedure a bit, particularly for CardDAV. There was also one step I found necessary that is missing from the CardDAV instructions.</p>
<p>To set up CalDAV, simply go to the “Mail, Contacts, and Calendars” page under “Settings”, click “Add Account”, click “Other”, and click “Add CalDAV Account”. Enter the URL of the CalDAV server, followed by the username and a calendar name, then enter the username and password. For example, <code>https://caldav.williamjbowman.com/user/private.ics/</code>. The trailing slash is important, although the “https”, username, and calendar name seem less important. It seems to “just work” without them when using a subdomain, but not when using a subdirectory.</p>
<p>To set up CardDAV, I had to manually create the address book on the server first: <code>touch /srv/radicale/user/contacts.vcf</code>. Then on your iPhone, go to “Mail, Contacts, and Calendars”, click “Add Account”, click “Other”, and click “Add CardDAV account”. Enter the URL, e.g., <code>carddav.williamjbowman.com</code>, the username, and the password. Things just seem to work after this, contrary to the <a href="http://radicale.org/">radicale</a> documentation.</p>
<h4 id=""><a href="http://lostpackets.de/pycarddav/pages/usage.html">pycarddav</a></h4>
<p><a href="http://lostpackets.de/pycarddav/pages/usage.html">pycarddav</a> provides pretty good documentation, but I want to point out that you need the <code>write_support</code> option set if you actually want to modify the address book locally and sync to the CardDAV server. This was not obvious to me from the documentation and caused some strange errors. Obviously from the value you must use, this feature is dangerous and experimental, so do not use it. I also have to disable SSL verification because my SSL certificate does not include the <code>carddav</code> subdomain yet. However, you should never do this, because it enables man-in-the-middle attacks.</p>
<h5 id="configpycardpycardconf"><code>~/.config/pycard/pycard.conf</code></h5>
<div class="brush: cfg">
<pre><code>[Account wjb]
user: user
# A shell command line to read the password.
passwd_cmd: gnome-keyring-query get user@carddav.williamjbowman.com
resource: https://carddav.williamjbowman.com/user/contacts.vcf/
# If verify is set to False, no SSL Certificate checks are done at all.
verify: False
auth: basic
write_support: YesPleaseIDoHaveABackupOfMyData</code></pre></div>
<h2 id="conclusion-and-future-work">Conclusion and Future Work</h2>
<p>Now you can replace Google or Apple and manage your contacts, calendars, and reminders yourself. In the future, I need to figure out how to encrypt all these on disk in such a way that data is only decrypted when a user tries to access them, and without storing a key or password on the server. I also plan to add some scripts to enable new features for reminders, like dependencies between reminders, and enable reminders from a particular list to become due randomly.</p>A much lower cellphone billurn:https-www-williamjbowman-com:-blog-2015-07-13-a-much-lower-cellphone-bill2015-07-14T01:13:02Z2015-07-14T01:13:02ZWilliam J. Bowman
<p>My monthly cellphone bill averages less than $10 per month. I have a smartphone. I have data, texting, and voice, and I use them. Let me tell you how I achieve such a low bill.</p>
<p><em>Disclaimer</em>: I will get a credit if you sign-up through my referral link, but so will you.</p>
<!-- more-->
<h2 id="introduction">Introduction</h2>
<p>I sacrifice a little bit of convenience for a dramatically lower bill. First, I don’t have a contract. I pay based on my usage. Second, I route all of my texting and voice through the internet, and completely disable the standard text and voice services from the cell service provider. Instead, I route both through my data plan. This means I never pay for using text or voice, I pay only for data usage. Lastly, I am nearly always connected to wifi, and I keep 4G disabled normally. This prevents me from using data much at all, keeping my data usage low.</p>
<p>I gain some conveniences a normal cell service doesn’t provide. I can make and receive calls and text for free while overseas as long as I’m on wifi. I have a spam filter for voice and texts, so I haven’t received a telemarketing call in ages. I can respond to texts from my computer or my phone, whichever is most convenient. My voicemails are transcribed so I can read them, when the machine learning algorithm does a good job.</p>
<p>I accomplish all this via a combination of <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> and <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a>.</p>
<h2 id="ting">Ting</h2>
<p><a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> is my cell service provider. They use the Sprint network. They do not have contracts; instead they provide a tiered pay-per-use service. They do not have any hidden fees. Their customer service has been exceptional in my experience.</p>
<p><a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> also provides remarkable control over your devices. The following features are provided by default for no extra charge. You can set up call-forwarding per device. You can set up alerts and/or automatically disable services when you approach a certain usage. For instance, you can get an email and automatically disable data on your device—until manually re-enabled—just before your usage reaches the next tier. You can even disable services such as incoming/outgoing voice, text, or data per device. You can change how your number is displayed in outgoing calls.</p>
<p>My <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> device is set up to allow incoming and outgoing data, outgoing phone calls (for emergencies), and nothing else. I have an alert set up to notify me and disable data when I approach the second tier. I have never gone beyond the first tier usage, which costs $3 for up to 100MB of data in a month.</p>
<h2 id="google-voice">Google Voice</h2>
<p><a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a> serves my phone number. <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a> will provide a new number for free in your choice of area code, or allow you to port your existing number for a one-time fee of $20. <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a> will allow you to set up call forwarding to multiple other phone numbers, or none. It allows you to make VOIP calls, and send and receive texts via Google Hangouts. It comes with a spam filter for voice and texts, which works as well as Gmail’s spam filter in my experience. It includes a voicemail transcription feature, and can email you the transcribed voicemail or send it to Google Hangouts, or both.</p>
<p>My <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a> setup sends texts, calls, and voicemails to Google Hangouts, which is installed on my iPhone. I never use the built-in texting or phone call features. I do not forward calls or texts to any other number.</p>
<h2 id="a-simple-how-to">A simple how-to</h2>
<p>To move to <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> + <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a>, first you need a phone capable of being used on the Sprint network. You can use an unlocked iPhone, or many Android devices. On their website, <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> has a <a href="https://ting.com/byod" title="Ting Devices">list</a> of devices that are compatible with their network, and the option to purchase a used or refurbished device through a third-party vendor.</p>
<p>Next, register for and port your existing number to <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a>. This may take some time. Opt in to texting and voice through Google Hangouts, instead of through the outdated Google Voice app. You may need to put some money on the account before it will allow outgoing calls. Calls within the US are free, and you can refund the balance at any time.</p>
<p>Then, sign up for <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> using your device. If you follow my referral link, you may get a $25 credit. Allow them to assign you a new number. If you follow my setup, you won’t use this number.</p>
<p>On your <a href="https://zfl16e28t96.ting.com/" title="Ting Referral Link">Ting</a> device, disable texts completely and disable incoming voice. Consider setting up some alerts for your data usage. Change the outgoing number displayed by your phone to your <a href="https://www.google.com/voice/" title="Google Voice">Google Voice</a> number.</p>
<p>Now install and set up Google Hangouts on your phone. Give it a test run.</p>
<p>Finally, leave your phone in airplane mode with wifi enabled. If you’re not near wifi and need to use data, disable airplane mode temporarily, but remember to enable it again when you are done.</p>An on demand Minecraft Serverurn:https-www-williamjbowman-com:-blog-2014-06-13-an-on-demand-minecraft-server2014-06-13T19:21:00Z2014-06-13T19:21:00ZWilliam J. Bowman
<p>Sometimes I play minecraft. Sometimes I play <em>a lot</em> of minecraft and sometimes I just stop playing for months. Lately when I do play, I’ve been playing with a slightly modified version of <a href="http://www.technicpack.net/tekkit/">Tekkit</a> and running my own server. I have a VPS that I probably underuse, so I decided to run the server there for when I do play with my friends.</p>
<p>My VPS is not very powerful, and running a Minecraft server when I stop playing for months is a huge waste of resources. I sought a way to automatically bring the server up when I wanted to play and shut it down when I wasn’t playing for a while.</p>
<!-- more-->
<h2 id="intro-and-credits">Intro and credits</h2>
<p>Most of this I gleaned by reading <a href="http://www.planetminecraft.com/blog/automatically-stop-a-server-when-nobodys-playing/">this article</a> at planetminecraft.com. It’s quite well written, but I made my own changes to suit my needs and expanded on some things. In particular, I wanted more abstraction, the website seems to mangle the bits of code posted there, and a couple of things were left unexplained.</p>
<p>I also heavily referenced <a href="http://wiki.vg/Server_List_Ping#Server_-.3E_Client">this wiki</a> to understand the minecraft ping protocol to get MOTD even when the server is down, letting users know the server is starting up.</p>
<p>All this code is available on <a href="https://github.com/bluephoenix47/tekkit-on-demand">github</a>.</p>
<p>Disclaimer: This code probably has at least one bug.</p>
<h2 id="assumptions">Assumptions</h2>
<p>A couple of notes before we get started. I run Arch Linux on all my machines, including my server. I use <code>cronie</code> as my crontab implementation. These scripts make use of lots of ‘standard’ tools such as <code>screen</code>, <code>sed</code>, <code>grep</code>, and <code>tr</code>, and ‘less standard’ ones like <code>netcat</code>, <code>pgrep</code>, and <code>xinetd</code>. You don’t really need to understand them to use this, but I won’t explain them here. I will assume you know how to run and install a Minecraft server. These scripts should work with any Minecraft server; I personally use Tekkit.</p>
<p>This article, and the scripts to a lesser extent, expect files to be in particular places. The only hard-coded path in the scripts should be <code>/etc/tekkit-on-demand/config.sh</code>. The article expects the binaries to be installed in <code>/usr/bin/tekkit-{start,idle}</code>, and the launch helper to be installed in <code>/etc/tekkit-on-demand/launch.sh</code>.</p>
<h2 id="the-server-overview">The server: overview</h2>
<p>The server runs in a <code>screen</code> session. I previously used <code>systemd</code> to manage the server, but there are a few advantages to running it in <code>screen</code>. You can easily, and programmatically, send commands to the server, and more easily filter logs, which I find necessary due to the absurd number of INFO messages. The server is run as an unprivileged user, but requires <code>root</code> to launch it.</p>
<h3 id="configsh">config.sh</h3>
<p>The file <code>config.sh</code> contains all the configuration variables, and the functions with the core commands for starting and stopping the server, and detecting when the server is idle.</p>
<p>The file is configured by simply setting the variables at the top of the file. The variables have sensible defaults seen later in the file. We will refer to some of the variables such as <code>$SERVER_USER</code> in the rest of the guide.</p>
<p>Advanced configuration involves changing the functions <code>start</code>, <code>stop</code>, <code>idle</code>, and <code>debug</code>. The functions shouldn’t need to be changed unless your server is configured quite differently, or you want to avoid <code>screen</code>, <code>xinetd</code>, or some other vital piece explained in the rest of the guide.</p>
<div class="brush: sh">
<pre><code>#!/bin/sh
## Change these configuration variables. They should probably match server.properties
## Leave them blank if you think I'm a good guesser.
SERVER_ROOT=
SERVER_PROPERTIES=
LOCAL_PORT=
LOCAL_IP=
MINECRAFT_JAR=
MINECRAFT_LOG=
SESSION=
WAIT_TIME=
SERVER_USER=
LAUNCH=
START_LOCKFILE=
IDLE_LOCKFILE=
PLAYERS_FILE=
## NB: This default may not be sensible
JAVAOPTS=
JAVAOPTS=${JAVAOPTS:--Xmx2G -Xms1G -server -XX:+UseG1GC -XX:MaxGCPauseMillis=50 \
-XX:ParallelGCThreads=2 -XX:+DisableExplicitGC -XX:+AggressiveOpts -d64}
## TODO: Currently not used. Need to recompute size and UTF-16BE
## encode the message, which is annoying
MESSAGE=
## Here be defaults
SERVER_ROOT=${SERVER_ROOT:-/srv/tekkit}
SERVER_PROPERTIES=${SERVER_PROPERTIES:-$SERVER_ROOT/server.properties}
LOCAL_PORT=${LOCAL_PORT:-$(sed -n 's/^server-port=\([0-9]*\)$/\1/p' ${SERVER_PROPERTIES})}
LOCAL_IP=${LOCAL_IP:-$(sed -n 's/^server-ip=\([0-9.]*\)$/\1/p' ${SERVER_PROPERTIES})}
MINECRAFT_JAR=${MINECRAFT_JAR:-$SERVER_ROOT/Tekkit.jar}
MINECRAFT_LOG=${MINECRAFT_LOG:-$SERVER_ROOT/server.log}
SESSION=${SESSION:-Minecraft}
MESSAGE=${MESSAGE:-Just a moment please}
WAIT_TIME=${WAIT_TIME:-600}
SERVER_USER=${SERVER_USER:-tekkit}
LAUNCH=${LAUNCH:-/etc/tekkit-on-demand/launch.sh}
START_LOCKFILE=${START_LOCKFILE:-/tmp/startingtekkit}
IDLE_LOCKFILE=${IDLE_LOCKFILE:-/tmp/idleingtekkit}
PLAYERS_FILE=${PLAYERS_FILE:-/tmp/tekkitplayers}
...</code></pre></div>
<h2 id="starting-the-server">Starting the server</h2>
<p>Starting the server is tricky. We must ensure any user that tries to connect sees a message alerting them that the server is not up now, but will be shortly. We also need to be sure to start only one instance of the server. Finally, we have to route traffic between the server and the meta-server that is watching to start the server on-demand.</p>
<h3 id="start-on-demand">Start on-demand</h3>
<p>To automatically start the server, we use <code>xinetd</code>, a ‘super-server’ (or as I prefer, ‘meta-server’). The meta-server is a server that manages servers by binding to the server’s port, starting the server when a client attempts to connect, then forwarding all traffic to the server.</p>
<p><code>tekkit</code>:</p>
<div class="brush: xinetd">
<pre><code>service tekkit
{
type = UNLISTED
instances = 20
socket_type = stream
protocol = tcp
wait = no
user = root
group = root
server = /usr/bin/tekkit-start
port = 25565
disable = no
}</code></pre></div>
<p>We install this file in <code>/etc/xinetd.d/tekkit</code>, and have <code>xinetd</code> reread configuration files, for instance, via <code>systemctl reload xinetd</code>. This path is fixed by <code>xinetd</code>. You must change <code>port = ...</code> in this file if you change <code>$SERVER_PORT</code>.</p>
<p>Now when someone tries to connect to your server on port <code>25565</code>, the meta-server will run the file <code>/usr/bin/tekkit-start</code>. Note that since the meta-server is binding port <code>25565</code>, your server must use a different port. I use port <code>25555</code>, but you can configure this with <code>$LOCAL_PORT</code>.</p>
<h3 id="screen-and-the-server-command">Screen and the server command</h3>
<p>A typical Minecraft server is started with a command that looks something like <code>/usr/bin/java $JAVAOPTS -jar $MINECRAFT_JAR nogui</code>. We add to this command a filter that discards INFO messages and captures the number of players. All output is first piped to a <code>sed</code> script that watches for the response to a <code>list</code> command. The <code>list</code> command is a Minecraft server command that lists the number of players online. The number of players is captured to the file specified by <code>$PLAYERS_FILE</code>. The remaining output is filtered through <code>grep</code> to discard INFO messages. This is done in <code>config.sh</code> in the <code>start</code> function:</p>
<div class="brush: sh">
<pre><code>start() {
/usr/bin/java $JAVAOPTS -jar $MINECRAFT_JAR nogui 2>&1 \
| sed -n -e 's/^.*There are \([0-9]*\)\/[0-9]* players.*$/\1/' -e 't M' -e 'b' -e ": M w $PLAYERS_FILE" -e 'd' \
| grep -v -e "INFO" -e "Can't keep up"
}</code></pre></div>
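<p>To see the capture in action, here is a runnable sketch that feeds two made-up log lines through an equivalent filter. It uses the <code>w</code> flag of <code>sed</code>’s <code>s</code> command, a more compact spelling of the label dance above, and <code>[0-9]*</code> for the maximum player count so two-digit maximums like <code>/20</code> match:</p>

```shell
# Two fake server log lines: the first is a `list` response, the second is
# ordinary chatter. The matching line's player count is written to
# $PLAYERS_FILE; sed's -n flag keeps everything off stdout.
PLAYERS_FILE=$(mktemp)
printf '%s\n' \
  '2014-06-13 19:21:00 [INFO] There are 3/20 players online' \
  '2014-06-13 19:21:01 [INFO] Saving chunks' \
  | sed -n 's|^.*There are \([0-9]*\)/[0-9]* players.*$|\1|w '"$PLAYERS_FILE"

cat "$PLAYERS_FILE"    # the captured player count: 3
```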
<p>We want to run the server in <code>screen</code> to allow issuing commands, such as <code>list</code>, to the server. Unfortunately, <code>screen</code> doesn’t appear to take a function as an argument. We use <code>launch.sh</code> as a wrapper, and have <code>screen</code> run <code>launch.sh</code> as an unprivileged user called <code>$SERVER_USER</code>.</p>
<p><code>launch.sh</code>:</p>
<div class="brush: sh">
<pre><code>#!/bin/sh
source /etc/tekkit-on-demand/config.sh
cd $SERVER_ROOT
start</code></pre></div>
<p>The file <code>/usr/bin/tekkit-start</code> is actually responsible for starting the server, and the <code>screen</code> command appears in there. However, much more happens before starting the server…</p>
<h3 id="server-starting-message">Server starting message</h3>
<p>When a player first connects, we do not want them scared away by a “Can’t reach server” message. We implement the Minecraft server list ping response, detailed <a href="http://wiki.vg/Server_List_Ping#Server_-.3E_Client">here</a>, to give them a less scary message. This protocol is implemented in the function <code>sign</code> in the <code>tekkit-start</code> file.</p>
<p><code>tekkit-start</code>:</p>
<div class="brush: sh">
<pre><code>#!/bin/sh
source /etc/tekkit-on-demand/config.sh
sign(){
# Kick protocol start
echo -en "\xFF"
# Length in characters: (including protocol, MOTD, current, max players)
# 22
# |
echo -en "\x00\x22"
# UTF-16BE String: Protocol header
echo -en "\x00\xA7\x00\x31\x00\x00"
# Protocol version:
# 4 7
# | |
echo -en "\x00\x34\x00\x37\x00\x00"
# Minecraft version:
# 1 . 6 . 4
# | | |
echo -en "\x00\x31\x00\x2E\x00\x36\x00\x2E\x00\x34\x00\x00"
# MOTD: "Up in just a sec.."
echo -en "\x00\x55\x00\x70\x00\x20\x00\x69\x00\x6E\x00\x20\x00\x6A\x00\x75\x00\x73\x00\x74\x00\x20\x00\x61\x00\x20\x00\x73\x00\x65\x00\x63\x00\x2E\x00\x2E\x00\x00"
# Current Players:
# 0
# |
echo -en "\x00\x30\x00\x00"
# Max Players:
# 0
# |
echo -en "\x00\x30"
}</code></pre></div>
<p>This implementation is kind of bad, with lengths computed and strings encoded by hand. Maybe I’ll fix it later. The comment above each string explains what the string means. The first two <code>echo</code>s send binary strings representing the protocol start packet and the length of the message. The remaining <code>echo</code>s send UTF-16BE encoded information, such as the Minecraft version, the MOTD (the message displayed under the server name), and the number of players.</p>
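<p>Rather than hand-encoding every string, one could generate the UTF-16BE bytes with <code>iconv</code>. This is just a sketch of the idea, not part of the original script:</p>

```shell
# Encode a UTF-8 string as UTF-16BE, as the ping protocol requires.
utf16be() {
  printf '%s' "$1" | iconv -f UTF-8 -t UTF-16BE
}

# Inspect the bytes for the Minecraft version string:
utf16be "1.6.4" | od -An -tx1
# bytes: 00 31 00 2e 00 36 00 2e 00 34
```

<p>These are the same bytes as the hand-encoded <code>\x00\x31\x00\x2E\x00\x36\x00\x2E\x00\x34</code> above, minus the trailing <code>\x00\x00</code> field terminator.</p>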
<h3 id="control-flow">Control Flow</h3>
<p>The rest of the <code>tekkit-start</code> file is dedicated to control flow. We must ensure only one instance of the server is started, so we use <code>pgrep</code> to ask if the <code>$SERVER_USER</code> user has any process using the <code>$MINECRAFT_JAR</code>. If the server is not running, we start it, post the ping response, and wait for the server to start responding. If the server is already up, we use <code>nc</code> to route traffic between the server and meta-server.</p>
<p>To ensure every user continues to see the ping response while the server is starting but not yet responding, we create a <code>$START_LOCKFILE</code>. While the <code>$START_LOCKFILE</code> exists, the only thing <code>tekkit-start</code> will do is post the ping response.</p>
<div class="brush: sh">
<pre><code>...
if [ ! -f $START_LOCKFILE ]; then
touch $START_LOCKFILE
if ! pgrep -U $SERVER_USER -f "$MINECRAFT_JAR" >/dev/null; then
sudo -u $SERVER_USER -- screen -dmS $SESSION $LAUNCH
sign
while netcat -vz -w 1 localhost $LOCAL_PORT 2>&1 | grep refused > /dev/null; do
debug "Connection refused"
sleep 1
done
debug "Deleting start lock"
/bin/rm $START_LOCKFILE
debug `[ -f $START_LOCKFILE ] && echo "Lockfile still exists"`
else
/bin/rm $START_LOCKFILE
debug `[ -f $START_LOCKFILE ] && echo "Lockfile still exists"`
exec sudo -u $SERVER_USER nc $LOCAL_IP $LOCAL_PORT
fi
else
sign
fi</code></pre></div>
<h2 id="stopping-the-server">Stopping the server</h2>
<p>Stopping the server is easier than starting it. We want to stop the server when there have been no players online for some amount of time. However, we want to make sure not to stop it too eagerly, since a player may log off briefly to make food, do some work, or just get away from the computer for a little bit. Once we know the server is idle, we just stop it by issuing a <code>stop</code> command to the server.</p>
<div class="brush: sh">
<pre><code>stop() {
screen -S $SESSION -p 0 -X stuff 'stop\15'
debug "Shit's going down"
}</code></pre></div>
<p>This command tells <code>screen</code> to connect to the <code>$SESSION</code> session, on window <code>0</code>, and <code>stuff</code> the string <code>stop\15</code> into the input buffer. The command <code>stop</code> tells the server to stop running. The final character <code>\15</code> is the control character for enter/return, so this simulates typing <code>stop</code> and pressing enter.</p>
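<p>The escape is easy to verify from a shell: <code>\15</code> is octal 15, i.e. byte <code>0x0D</code>, the carriage return.</p>

```shell
# printf interprets \15 as an octal escape: octal 15 = 0x0D = carriage return.
printf 'stop\15' | od -An -tx1
# bytes: 73 74 6f 70 0d  ("stop" followed by CR)
```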
<h3 id="detecting-an-idle-server">Detecting an idle server</h3>
<p>We specify how frequently to perform an idle check using <code>crontab</code>. If no one is online during the check, the script will wait <code>$WAIT_TIME</code> seconds and check again. If both checks find the server idle, the server is shut down.</p>
<p>We add the following to <code>$SERVER_USER</code>’s crontab to run the idle check once an hour.</p>
<p><code>crontab -e</code>:</p>
<div class="brush: crontab">
<pre><code>@hourly /usr/bin/tekkit-idle</code></pre></div>
<p>To determine the number of users online via script, we have all logs filtered through the <code>sed</code> script seen in <code>start()</code>. The script looks for a particular server message, and dumps the number to the file <code>$PLAYERS_FILE</code>.</p>
<p>We can force the server to output this message by using the <code>list</code> command. Since the server is running in a <code>screen</code> process, we can issue this command via <code>screen -S Minecraft -p 0 -X stuff 'list\15'</code>. The command <code>list</code> asks the server to dump the current number of players.</p>
<p>Before issuing the request, we clear the file. After issuing the request, we wait until the file is not blank, meaning <code>sed</code> must have found the message and dumped it to the file. This prevents race conditions. We then read the value in and compare it to 0. All this logic is implemented in the <code>config.sh</code> function <code>idle</code>. There is also a bunch of debugging information there, because I had trouble with <code>sed</code> outputting invisible characters to the <code>$PLAYERS_FILE</code>. We use <code>tr -d '[:cntrl:]'</code> to remove these invisible control characters.</p>
<div class="brush: sh">
<pre><code>idle() {
echo -n "" > ${PLAYERS_FILE}
debug `cat ${PLAYERS_FILE}`
screen -S $SESSION -p 0 -X stuff 'list\15'
players=`tail -n 1 ${PLAYERS_FILE} | tr -d '[:cntrl:]'`
while [ -z "${players}" ]; do
sleep 1
players=`tail -n 1 ${PLAYERS_FILE} | tr -d '[:cntrl:]'`
done
debug "There are ${players} players"
if [ "0" = "${players}" ]; then
debug "Idle"
true
else
debug "Not idle"
false
fi
}</code></pre></div>
<p>Below is the idle detection script, called <code>tekkit-idle</code>. The function <code>idle</code> is implemented in <code>config.sh</code> and returns true when no players are online. The rest of the script implements the logic I explained before: if the server is idle, i.e., no one is online, then wait <code>$WAIT_TIME</code> seconds. If the server is still idle, shut it down.</p>
<p><code>tekkit-idle</code>:</p>
<div class="brush: sh">
<pre><code>#!/bin/sh
source /etc/tekkit-on-demand/config.sh
if [ ! -f $IDLE_LOCKFILE ]; then
touch $IDLE_LOCKFILE
debug "No lock file, checking!"
if idle; then
debug "Idle, waiting!..."
sleep $WAIT_TIME
if idle; then
debug "Still idle, stopping!"
stop
fi
fi
/bin/rm $IDLE_LOCKFILE
fi
debug "Idle check complete"</code></pre></div>Use Physical Windows Partition as VirtualBox Guest for Linux Hosturn:https-www-williamjbowman-com:-blog-2012-08-27-use-physical-windows-partition-as-virtualbox-guest-for-linux-host2012-08-28T02:35:32Z2012-08-28T02:35:32ZWilliam J. Bowman
<p>I have a Windows partition on my machine, because sometimes there are things wine can’t handle, and sometimes I need more performance than VirtualBox can handle.</p>
<!-- more-->
<p>However, I didn’t like keeping around a Windows VM AND my Windows partition. So I got to thinking: well, if a virtual disk is a file, and a regular disk is a file, I wonder if I can trick VirtualBox into using the real disk as its virtual disk, and just mount the partition under VirtualBox, while still letting me boot to that partition when I need the performance of a real machine.</p>
<p>Turns out you can. Someone else has already done a great job of documenting this process, so I’ll just link you to <a href="http://blog.amhill.net/2010/01/27/linux-ftw-using-virtualbox-with-an-existing-windows-partition/">it</a>. I also found <a href="http://www.virtualbox.org/manual/ch09.html#rawdisk">this resource</a> helpful.</p>Overclocking your ATI Card [Linux]urn:https-www-williamjbowman-com:-blog-2012-08-27-overclocking-your-ati-card-linux2012-08-28T02:26:15Z2012-08-28T02:26:15ZWilliam J. Bowman
<p>So recently I’ve been getting all my games to run under linux. As part of this process, I’m learning all about my ATI drivers, because graphics drivers are universally terrible. However, under linux, you can tinker more freely to make them (slightly) less terrible.</p>
<!-- more-->
<p>This particular hackery came as I was researching how to fix an ‘ASIC hang happened’ issue. I haven’t yet figured that out, but I did find how to over/under clock my graphics card, and stress test it, using the proprietary ATI drivers.</p>
<p>DISCLAIMER: Overclocking your machine is dangerous. Don’t do it.</p>
<p>So, to overclock your card, here are a few helpful commands, and what they do:</p>
<ul>
<li>
<p><code>aticonfig --adapter 0 --od-getclocks</code> List adapter 0’s current clock information, including peak ranges. See <code>aticonfig --list-adapters</code> to figure out which adapter you want to specify.</p></li>
<li>
<p><code>aticonfig --adapter 0 --od-setclocks=900,1150</code> Set adapter 0’s core clock to 900, and memory clock to 1150. aticonfig might warn you about how dangerous this is and request you set a flag to enable overclocking.</p></li>
<li>
<p><code>atiode -P 600 -h $DISPLAY; echo $?</code> Run the ATI stress testing tool for 600 seconds (10 minutes) on the current X display. Print out the return value after it’s done. See <code>aticonfig --help</code> for return value meanings.</p></li>
<li>
<p><code>while sleep 5; do aticonfig --adapter 0 --od-gettemperature --od-getclocks >> atiode.log; done</code> Log temperatures and clocks to atiode.log, to make sure your GPU isn’t overheating during the stress test.</p></li>
<li>
<p><code>aticonfig --od-commitclocks</code> I’m not really sure what this does, but the <code>--help</code> suggests you should run it after the stress tests.</p></li></ul>
<p>Now you know how to over/under-clock your card, and stress test it. Have fun.</p>
<p>Purely speculatively, if you get this “ASIC hang happened” issue, you might try underclocking the card. I haven’t tested this thoroughly enough to tell whether it will help, but something I read in the overclock warnings suggested it might.</p>Modding Minecrafturn:https-www-williamjbowman-com:-blog-2012-08-12-modding-minecraft2012-08-13T00:18:58Z2012-08-13T00:18:58ZWilliam J. Bowman
<p>I like minecraft, a lot, on occasion. But it needs a few tweaks for me to really get into it. I’m going to document them now:</p>
<!-- more-->
<p>First, you need mcpatcher, so we can use HD texture packs and install mods. Get it here: <a href="https://github.com/pclewis/mcpatcher/downloads">https://github.com/pclewis/mcpatcher/downloads</a></p>
<p>I’m using linux, so I need the .jar file. You can run it via</p>
<div class="brush: bash">
<pre><code>java -jar ./mcpatch-2.4.1_02.jar</code></pre></div>
<p>This should pop up a handy GUI. More on that in a minute.</p>
<p>Next, we need the appropriate mods:</p>
<ul>
<li>
<p>Auto switcher: automatically switch to the best tool for the job; the right type of pick for mining, axe for cutting, etc.
<br /> <a href="http://www.minecraftforum.net/topic/753030-131125-thebombzens-mods-now-with-autoswitch-2/">http://www.minecraftforum.net/topic/753030-131125-thebombzens-mods-now-with-autoswitch-2/</a></p></li>
<li>
<p>OptiFine: Minecraft is full of performance issues. OptiFine helps smooth them out, and supports better graphics for those with reasonable computers. <a href="http://www.minecraftforum.net/topic/249637-131-optifine-hd-b1-fps-boost-hd-textures-aa-af-and-much-more/">http://www.minecraftforum.net/topic/249637-131-optifine-hd-b1-fps-boost-hd-textures-aa-af-and-much-more/</a></p></li>
<li>
<p>ModLoader: I don’t remember why I have this installed, but presumably it’s for good reason.
<br /> <a href="http://www.minecraftforum.net/topic/75440-v131-risugamis-mods-preliminary-updates/">http://www.minecraftforum.net/topic/75440-v131-risugamis-mods-preliminary-updates/</a></p></li></ul>
<p>So now that you have all the appropriate mods downloaded, launch mcpatcher and add the mods (Mods -> Add in the menu). Also point it at your minecraft.jar, which should be located in ~/.minecraft/bin/.</p>
<p>Now, ensure only the following mods are checked, and ensure they appear in this order. If they don’t, use the arrows at the bottom of the GUI to rearrange them:</p>
<ol>
<li>
<p>AutoSwitchMod</p></li>
<li>
<p>ModLoader</p></li>
<li>
<p>OptiFine</p></li>
<li>
<p>Custom Colors</p></li>
<li>
<p>Random Mobs</p></li>
<li>
<p>Connected Textures</p></li>
<li>
<p>Better Skies</p></li></ol>
<p>Then click patch. mcpatcher will probably complain, but just click yes.</p>
<p>Lastly, we need a good texture pack. I like this one: <a href="http://bdcraft.net/download-purebdcraft-texturepack-for-minecraft">http://bdcraft.net/download-purebdcraft-texturepack-for-minecraft</a>, in the 128×128 size. Much higher and Minecraft doesn’t seem to handle itself very well. Download it and save the .zip to ~/.minecraft/texturepacks. You should be able to select it from the in-game menu.</p>Using Evil for Goodurn:https-www-williamjbowman-com:-blog-2012-07-26-using-evil-for-good2012-07-26T20:33:03Z2012-07-26T20:33:03ZWilliam J. Bowman
<p>So I use Vim as my primary editor. Unfortunately, some applications I require (e.g. <a href="http://proofgeneral.inf.ed.ac.uk/">Proof General</a>) run only on the Emacs operating system, which comes with a terrible editor. Thankfully, I’ve found a pretty decent port of Vim to Emacs, called (appropriately) Evil.</p>
<!-- more-->
<p>Setting up Evil is quite easy:</p>
<ol>
<li>Download the evil package to <code>~/.emacs.d/evil</code>:</li></ol>
<div class="brush: bash">
<pre><code>cd ~/.emacs.d/
git clone git://gitorious.org/evil/evil.git evil</code></pre></div>
<ol>
<li>Next, add a few lines to your .emacs file:</li></ol>
<div class="brush: elisp">
<pre><code>(add-to-list 'load-path "~/.emacs.d/evil/")
(require 'evil)
(evil-mode 1)</code></pre></div>
<ol>
<li>Now, customize. Evil doesn’t have ALL of Vim’s shortcuts by default. I’m not sure why. Here is what I added to .emacs for Evil. It includes two other Emacs plugins: evil-surround, which is a port of the Vim surround plugin (necessary), and undo-tree, which provides fancy undo commands.</li></ol>
<div class="brush: elisp">
<pre><code> (add-to-list 'load-path "~/.emacs.d/evil/")
(add-to-list 'load-path "~/.emacs.d/evil-surround/")
(add-to-list 'load-path "~/.emacs.d/undo-tree")
;; Evil settings
(setq evil-shift-width 2)
(require 'evil)
(evil-mode 1)
(require 'surround)
(global-surround-mode 1)
(evil-ex-define-cmd "!" 'shell-command)
;;(evil-define-key 'normal proof-mode-map (kbd "M-v") 'proof-goto-point)
(evil-define-key 'normal proof-mode-map (kbd "C-c RET") 'proof-goto-point)
;;; esc quits
(define-key evil-normal-state-map [escape] 'keyboard-quit)
(define-key evil-normal-state-map (kbd "C-u") 'keyboard-quit)
(define-key evil-visual-state-map [escape] 'keyboard-quit)
(define-key minibuffer-local-map [escape] 'minibuffer-keyboard-quit)
(define-key minibuffer-local-ns-map [escape] 'minibuffer-keyboard-quit)
(define-key minibuffer-local-completion-map [escape]
'minibuffer-keyboard-quit)
(define-key minibuffer-local-must-match-map [escape]
'minibuffer-keyboard-quit)
(define-key minibuffer-local-isearch-map [escape]
'minibuffer-keyboard-quit)
;; Other things
(define-key evil-normal-state-map "Y" 'copy-to-end-of-line)
(global-set-key (kbd "RET") 'newline-and-indent)
;; coq
(load-file "/usr/share/emacs/site-lisp/ProofGeneral/generic/proof-site.el")
;;(global-set-key (kbd "C-c RET") 'proof-goto-point)
(evil-ex-define-cmd "[pr]prove" 'proof-goto-point)</code></pre></div>
<p>Here’s a <a href="http://stackoverflow.com/questions/8483182/emacs-evil-mode-best-practice">link</a> to someone’s very large Evil configuration file.</p>Command line trash/recycle scripturn:https-www-williamjbowman-com:-blog-2012-05-31-command-line-trash-recycle-script2012-05-31T08:17:41Z2012-05-31T08:17:41ZWilliam J. Bowman
<p>A while back, I got really sick of sometimes accidentally rm-ing a file. I thought “Woe is me, if only I had a command that, instead, hid the file away from me, in a place I knew of but didn’t really think about, so I could recover it if I wanted it.”</p>
<!-- more-->
<p>I googled around for a bit and the best I found was a terrible approximation of a script to move files into a Trash folder. It didn’t work when there were spaces or funny symbols, it didn’t have exactly the same interface as rm, making it hard to just alias to rm, and it didn’t handle errors very well.</p>
<p>And honestly, I’ve come to think I don’t even want that. What I want is a script that just renames the file, to something like .filename.timestamp.trash. Now it’s hidden, and in the same convenient place if I need to recover it. I could even have multiple versions in the same place! Sure, maybe things get cluttered, but a single</p>
<div class="brush: bash">
<pre><code>find ~/ -iname "*.trash" -exec /bin/rm -f {} \; </code></pre></div>
<p> clears them all out. Alias that to ‘empty’ and I’d be set!</p>
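<p>The core of the idea fits in a few lines. Here is a minimal sketch (the function name is just illustrative; the real script linked below does much more):</p>

```shell
# Hide a file in place as .<name>.<unix-timestamp>.trash, in the same
# directory, so it can be recovered later.
trash() {
  for f in "$@"; do
    dir=$(dirname -- "$f") || return 1
    base=$(basename -- "$f") || return 1
    mv -- "$f" "$dir/.$base.$(date +%s).trash" || return 1
  done
}
```

<p>Quoting every expansion and using <code>--</code> is what makes this safe for filenames with spaces and funny symbols.</p>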
<p>But woe is me, no such script exists… oh, until about 10 minutes ago. I finally got around to writing it. 400 lines of beautiful bash! The same interface as rm, even with the ability to move and symlink files to a .trash folder, in case you don’t agree with me that they should just stay in the same damn folder. It handles filenames with spaces and funny characters, handles globs, and did I mention it has the same interface as rm?</p>
<p>I may get around to rewriting this in a better language, and maybe adding some things like a recover command that will display all the versions of a file. Maybe I’ll even add more documentation (I mean, it’s got a pretty thorough <code>--help</code> but…).</p>
<p>For now, here’s a github: <a href="https://github.com/bluephoenix47/trash.sh">https://github.com/bluephoenix47/trash.sh</a></p>Version controlled configsurn:https-www-williamjbowman-com:-blog-2012-05-30-version-controlled-configs2012-05-31T01:22:11Z2012-05-31T01:22:11ZWilliam J. Bowman
<p>Disclaimer: I typed this in a hurry and haven’t proof-read it, or tried running any of the code (except the script at the end).</p>
<p>So a while ago I decided to use git to track all my dot-files and other assorted configuration stuff that each of my linux systems need. I’m going to try to outline how I did this:</p>
<!-- more-->
<p>First, I have some files that exist on all my machines. These I’m going to call <code>common</code> files. Things like <code>muttrc</code>, irssi config scripts, etc. These are also mostly configuration I don’t mind making public.</p>
<p>Second, I have some files that are common, but need to be private. I call these <code>private</code> files.</p>
<p>Lastly, I have files that are specific to a machine, but I still want tracked. Things like <code>.xinitrc</code> and <code>.profile</code> are in here.</p>
<p>The first thing we do is create some directories:</p>
<div class="brush: bash">
<pre><code> mkdir -p .config.git/common
mkdir -p .config.git/private</code></pre></div>
<p>So, all the common configuration stuff will go in <code>../common</code>, the ssh/gpg/other private things will go in <code>../private</code>, and the machine specific stuff will just go in <code>.config.git</code>.</p>
<p>Next we create some repos.</p>
<div class="brush: bash">
<pre><code> git init .config.git/common
git init .config.git/private
git init .config.git</code></pre></div>
<p>Excellent. Be careful with this next step, you’re about to not have any configuration.</p>
<div class="brush: bash">
<pre><code> cd $HOME
mv .bashrc .config.git/common
mv *rc .config.git/common
mv .ssh .config.git/private
mv .gnupg .config.git/private
mv .profile .config.git/
mv .config.git/common/.xinitrc .config.git/</code></pre></div>
<p>Now we want to link all the configurations together using submodules. Git allows you to add one repository as a kind of dependency of another. This is called a submodule. You do this as follows:</p>
<div class="brush: bash">
<pre><code> cd .config.git
git submodule add $HOME/.config.git/common
git submodule add $HOME/.config.git/private</code></pre></div>
<p>Alternatively, if you want to push these to your server first:</p>
<div class="brush: bash">
<pre><code> cd $HOME/.config.git/common
git add *
git commit -a -m "Init"
ssh user@myserver.com git init --bare ~/repos/config.common
git remote add origin myserver.com:~/repos/config.common
git push origin master
cd $HOME/.config.git/private
git add *
git commit -a -m "init"
ssh user@myserver.com git init --bare ~/repos/config.private
git remote add origin myserver.com:~/repos/config.private
git push origin master
cd $HOME/.config.git/
git add .profile
git add .xinitrc
git submodule add user@myserver.com:~/repos/config.common
git submodule add user@myserver.com:~/repos/config.private
git commit -a -m "init"
ssh user@myserver.com git init --bare ~/repos/config.machine-name
git remote add origin myserver.com:~/repos/config.machine-name
git push origin master</code></pre></div>
<p>Now, you need to link everything back into your home so you can actually use your version controlled configs. I use a script to do this:</p>
<div class="brush: bash">
<pre><code> #!/bin/bash
# A script for creating symlinks to all the dot files stored in the
# config.git repo.
CONFIG_PATHS=${CONFIG_PATHS:-"~/.config.git/common/ ~/.config.git/private ~/.config.git/"}
EXCLUDE=${EXCLUDE:-"$1 bin/lndot.sh common/? private/? .config.git/? .git"}
# Make sure to exclude ., .., and any other files listed in $EXCLUDE
FIND_CMD="find $CONFIG_PATHS -maxdepth 1 -iname \"*\" "
for file in $EXCLUDE
do
FIND_CMD=$FIND_CMD"| egrep -v \"$file\$\" "
done
echo $FIND_CMD
# Ensure we're in home, to make proper symlinks
pushd ~/ >/dev/null
for i in $(eval $FIND_CMD)
do
ln -f -s $i ~/
done
popd >/dev/null</code></pre></div>
<p>Now, you have your configuration files version controlled, linked properly, and easily shareable! To add a new machine to this setup, all you need to do is:</p>
<div class="brush: bash">
<pre><code> git init .config.git
cd .config.git
git submodule add user@server.com:~/repos/config.common
git submodule add user@server.com:~/repos/config.private
ssh user@myserver.com git init --bare ~/repos/config.machine-name2
mv ~/.xinitrc .
mv ~/.profile .
git add .xinitrc
git add .profile
... etc</code></pre></div>
<p>One important thing to note: whenever you change a file in a submodule, you should also commit and push the parent module. Otherwise, when cloning the parent module, it will check out an older revision of the submodule.</p>
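<p>This pitfall is easy to reproduce with throwaway repositories. Everything below runs under a temporary directory; the repo names are just for the demo:</p>

```shell
#!/bin/sh
# Demonstrate the stale submodule pointer: the parent repo records a
# specific submodule commit, and sees later commits as a modification.
set -e
tmp=$(mktemp -d)
g() { git -c user.name=demo -c user.email=demo@example.com "$@"; }

g init -q "$tmp/common"
(cd "$tmp/common" && g commit -q --allow-empty -m "v1")

g init -q "$tmp/parent"
(cd "$tmp/parent" \
  && g -c protocol.file.allow=always submodule add "$tmp/common" common \
  && g commit -q -m "pin common at v1")

# Advance the submodule without committing the new pointer in the parent:
(cd "$tmp/parent/common" && g commit -q --allow-empty -m "v2")

# Until this change is committed and pushed in the parent, fresh clones
# of the parent will check out v1 of common, not v2:
(cd "$tmp/parent" && g status --short common)
```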
<p>The final piece of this setup, for me, was learning how to use bash configuration files so I could have both common and machine specific files coexist, and have settings that exist for all shells, or only interactive/login shells. I might make another post about this later.</p>
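<p>As a sketch of that last idea (the file names here are my own convention, not from this post), the top of a <code>.bashrc</code> can layer common and machine-specific settings:</p>

```shell
# Source shared settings first, then per-machine overrides, skipping
# whichever files do not exist on this machine.
load_configs() {
  for rc in "$HOME/.bashrc.common" "$HOME/.bashrc.$(hostname)"; do
    if [ -f "$rc" ]; then
      . "$rc"
    fi
  done
}
load_configs
```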