QNAP Mounting RAID6 after reboot and "DISK MISSING"

Quick version. State: the QNAP boots into the initialization dialog (the one you see when you first set up the QNAP), then switches to the "DISK MISSING" screen, but SSH can still be used to connect to it (using the old credentials, *strange*). Goal: mount your RAID and back up all data! Solution: # assemble your RAID Continue reading →
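A minimal sketch of the assembly step (the array and partition names below are assumptions, not taken from the post; check `cat /proc/mdstat` and `mdadm --examine` for your actual layout):

```shell
# Let mdadm scan all partitions for RAID metadata and reassemble the array
mdadm --assemble --scan

# Or name the array and its member partitions explicitly (placeholders:
# QNAP firmwares typically keep the data RAID on one partition per disk)
mdadm --assemble /dev/md0 /dev/sda3 /dev/sdb3 /dev/sdc3 /dev/sdd3

# Mount read-only and copy the data somewhere safe
mkdir -p /mnt/recovery
mount -o ro /dev/md0 /mnt/recovery
```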

Set a custom resolution with xrandr

Just a note to myself on getting a screen to run at its native resolution when xrandr refuses to auto-detect it: xrandr --newmode "1680x1050_custom" 146.25 1680 1784 1960 2240 1050 1053 1059 1089 -hsync +vsync xrandr --addmode VGA1 1680x1050_custom xrandr --output VGA1 --mode 1680x1050_custom --above LVDS1 Continue reading →
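The modeline numbers in such a command need not be hand-crafted; `cvt` (shipped with Xorg) computes standard CVT timings for a given resolution, and for 1680x1050 at 60 Hz it should print a modeline with the same timings as used above:

```shell
# cvt computes VESA CVT timings for a given width, height and refresh rate;
# copy the printed Modeline values into xrandr --newmode
cvt 1680 1050 60
```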

Downloading all files from an Amazon S3 bucket

I was trying to download all files from an Amazon S3 bucket and did not feel like clicking through them one by one. Here is the little Python (2) script I came up with: import urllib2   print("Retrieving file list ...") url = urllib2.urlopen('https://s3.amazonaws.com/tripdata?max-keys=9999999') data = url.read() url.close()   print("Parsing file list ...") import xml.etree.ElementTree e Continue reading →
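The parsing step in the (truncated) script above boils down to pulling the `<Key>` elements out of the ListBucket XML that S3 returns. The same step can be sketched in shell with `grep` and `sed`; the inlined XML and file names below are made up for illustration, and in practice you would pipe `curl -s 'https://s3.amazonaws.com/tripdata?max-keys=9999999'` into the pipeline instead:

```shell
# Sample ListBucket XML standing in for the real S3 response
# (the file names are hypothetical):
xml='<ListBucketResult><Contents><Key>file-a.zip</Key></Contents><Contents><Key>file-b.zip</Key></Contents></ListBucketResult>'

# Extract the object keys: grep -o prints one <Key>...</Key> match per line,
# sed strips the surrounding tags
keys=$(printf '%s' "$xml" | grep -o '<Key>[^<]*</Key>' | sed -e 's/<Key>//' -e 's|</Key>||')
printf '%s\n' "$keys"
```

Each extracted key can then be appended to the bucket URL and fetched, e.g. with `curl -O`.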

NFS Automount / Autofs Timeout

Recently, we had an issue where, on one machine, a single NFS mountpoint was not mountable (in our case via automount/autofs). Other mountpoints from the same server worked, and the same mountpoint worked on other servers, so a very mysterious issue indeed. It turned out that this was caused by a server problem, i.e., the Continue reading →

Backing up (MySQL Database) via LVM Snapshot

I have recently come across a nice solution for backing up "large" MySQL databases without having to handle a dedicated slave for dumps. That is, if a short outage (stopping the slave for a few seconds) is fine. We maintain a hot-swap slave (read-only) which will jump in for the master if it fails. Continue reading →

Cloudera, Spark and MySQL

I am using a Cloudera cluster (CDH-5.4.2-1.cdh5.4.2.p0.2) to run Spark (1.3.0). I wanted to access data from a MySQL database: val photos = sqlContext.load( "jdbc", Map( "driver" -> "com.mysql.jdbc.Driver", "url" -> "jdbc:mysql://testserver:3306/test?user=tester&password=testing", "dbtable" -> "photo")) photos.count Unfortunately, this does not Continue reading →

Linux: Splitting files in two

Here are two scripts that split the lines of a file into two files based on a given ratio. #!/bin/bash   # This script writes the first part of the lines from the given input file into one output file and the rest of the lines into another output file. # The first output file (with Continue reading →
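As a rough sketch of the same idea (not the original script; the sample input, the 70/30 ratio, and the file names are placeholders), the split can be done with `head` and `tail`:

```shell
# Create a 10-line sample input (stand-in for the real file)
seq 1 10 > input.txt

ratio=70                                  # percent of lines for the first part
total=$(wc -l < input.txt)
first=$(( total * ratio / 100 ))

head -n "$first" input.txt > part1.txt              # first 70% of the lines
tail -n +"$(( first + 1 ))" input.txt > part2.txt   # the remaining lines
```

Concatenating the two output files reproduces the input, so no line is lost or duplicated.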

Spring MVC: properties in the Application Context VS in the Servlet Context

I was deploying a web app based on Spring MVC (3.2.6.RELEASE). In this web app I was trying to use properties in the application context as well as in the servlet context. Now, I determined experimentally (I would have to check the code to be absolutely sure) that in the application context when using missing Continue reading →

MySQL: Install locally (not as root) from binaries

Under Ubuntu, I have tried to set up MySQL from binaries for a local user (not root), with another MySQL instance already running. This works, but is documented rather vaguely (I did not find anything that documented the whole process), so I will sum up the solution I have come up with here: download a MySQL Continue reading →

MySQL 5.5 vs 5.6: ERROR 1071 (42000): Specified key was too long; max key length is 767 bytes

I was building an index that was too long using MySQL 5.5. MySQL returned a warning: WARNING 1071 (42000): Specified key was too long; max key length is 767 bytes The index got silently truncated and, at least on the surface, everything worked. Continue reading →
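For context, the 767-byte limit is counted in bytes, not characters, so the maximum indexable column (or prefix) length depends on the column's character set: MySQL's utf8 uses up to 3 bytes per character and utf8mb4 up to 4. This is where the commonly seen prefix lengths come from:

```shell
# Maximum whole characters that fit into a 767-byte index key
echo "utf8 (3 bytes/char):    $(( 767 / 3 )) characters"   # 255
echo "utf8mb4 (4 bytes/char): $(( 767 / 4 )) characters"   # 191
```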