on Wednesday, July 10, 2013
I recently bought a Synology DS213+ and I have to say it's awesome. Initially I wanted to run SABnzbd and Sickbeard directly on the NAS. Unfortunately this setup has two problems. First of all, with the SABnzbd and Sickbeard packages installed on the Synology, the NAS never goes into hibernation (standby) anymore. There is a workaround that suggests keeping the log files of these two applications on a USB pen drive, but I never got it to work. The second problem is the transfer rate: SABnzbd on the Synology NAS only gave me between 3 and 4 MB/sec, while running it on a desktop computer on the same network gave me between 10 and 12 MB/sec.

In the end I decided to keep running SABnzbd and Sickbeard on my desktop computer and to copy new files manually onto my Synology NAS via FTP. I run the common post-processing setup using sabToSickBeard.py.

Keeping track of new files and manually uploading them to the NAS is a bit tedious, so I decided to start the upload as part of the post-processing. Here is what I did. Go into the folder containing the Sickbeard post-processing scripts, e.g. ~/.sickbeard/autoProcessTV, and create a new file upload.py in that folder with the content of the second listing below.

This is essentially a copy of this script on stackoverflow with some of the extra features like encryption, SFTP and recursive directory walking removed. Find the line that says FTP_PWD and replace it with the password of the FTP user on the Synology NAS. Plain FTP is not very secure, so you might as well have the password in the file - and yes, I don't care so much about security since everything is behind a router and a firewall anyway. Make upload.py executable, i.e. chmod +x upload.py.

Now make a backup of the original autoProcessTV.py file that comes with Sickbeard and replace it with the first listing below. In the new autoProcessTV.py, replace FTP_USER with the username of the FTP user on your Synology (e.g. ftpuser), replace FTP_PATH with the shared folder you want to upload to (e.g. video), and replace FTP_HOST with the IP address of your NAS (e.g. 192.168.0.25). The modified script works like the original. The upload is triggered whenever a line is printed that starts with "Moving"; from that line, the target directory in Sickbeard is extracted and uploaded via FTP to the NAS. Enjoy.

# Author: Nic Wolfe <nic@wolfeden.ca>
# URL: http://code.google.com/p/sickbeard/
#
# This file is part of Sick Beard.
#
# Sick Beard is free software: you can redistribute it and/or modify
# it under the terms of the GNU General Public License as published by
# the Free Software Foundation, either version 3 of the License, or
# (at your option) any later version.
#
# Sick Beard is distributed in the hope that it will be useful,
# but WITHOUT ANY WARRANTY; without even the implied warranty of
# MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE. See the
# GNU General Public License for more details.
#
# You should have received a copy of the GNU General Public License
# along with Sick Beard. If not, see <http://www.gnu.org/licenses/>.
import re
import os
import sys
import urllib
import os.path
import ConfigParser


class AuthURLOpener(urllib.FancyURLopener):
    def __init__(self, user, pw):
        self.username = user
        self.password = pw
        self.numTries = 0
        urllib.FancyURLopener.__init__(self)

    def prompt_user_passwd(self, host, realm):
        if self.numTries == 0:
            self.numTries = 1
            return (self.username, self.password)
        else:
            return ('', '')

    def openit(self, url):
        self.numTries = 0
        return urllib.FancyURLopener.open(self, url)


def processEpisode(dirName, nzbName=None):
    config = ConfigParser.ConfigParser()
    configFilename = os.path.join(os.path.dirname(sys.argv[0]), "autoProcessTV.cfg")
    print "Loading config from", configFilename

    if not os.path.isfile(configFilename):
        print "ERROR: You need an autoProcessTV.cfg file - did you rename and edit the .sample?"
        sys.exit(-1)

    try:
        fp = open(configFilename, "r")
        config.readfp(fp)
        fp.close()
    except IOError, e:
        print "Could not read configuration file: ", str(e)
        sys.exit(1)

    host = config.get("SickBeard", "host")
    port = config.get("SickBeard", "port")
    username = config.get("SickBeard", "username")
    password = config.get("SickBeard", "password")

    try:
        ssl = int(config.get("SickBeard", "ssl"))
    except (ConfigParser.NoOptionError, ValueError):
        ssl = 0

    try:
        web_root = config.get("SickBeard", "web_root")
    except ConfigParser.NoOptionError:
        web_root = ""

    params = {}
    params['quiet'] = 1
    params['dir'] = dirName
    if nzbName != None:
        params['nzbName'] = nzbName

    myOpener = AuthURLOpener(username, password)

    if ssl:
        protocol = "https://"
    else:
        protocol = "http://"

    url = protocol + host + ":" + port + web_root + "/home/postprocess/processEpisode?" + urllib.urlencode(params)

    print "Opening URL:", url

    try:
        urlObj = myOpener.openit(url)
    except IOError, e:
        print "Unable to open URL: ", str(e)
        sys.exit(1)

    result = urlObj.readlines()
    for line in result:
        print line
        # Modification: whenever Sick Beard reports that it moved a file,
        # extract the target directory and upload it to the Synology NAS via FTP.
        if line and line.startswith("Moving"):
            print "Triggering upload to Synology NAS"
            pattern = r'Moving file from .+ to (.*)'
            match = re.search(pattern, line)
            if match:
                target = match.group(1)
                sep = os.sep
                local_file = sep.join(target.split(sep)[:-1])
                remote_location = os.path.join(sep, "FTP_PATH", sep.join(target.split(sep)[-3:-1]))
                print "Sending %s to: %s " % (target, remote_location)
                upload_script = os.path.join(os.path.dirname(sys.argv[0]), "upload.py")
                cmd = "python %s -l '%s' -r '%s' -u FTP_USER -s FTP_HOST" % (upload_script, local_file, remote_location)
                print "Running %s" % (cmd,)
                os.system(cmd)
            else:
                print "Unable to extract target folder using '%s'" % (pattern,)
# -*- coding: utf8 -*-
'''This tool will ftp all the files in a given directory to a given location
if the file ftpallcfg.py exists in the directory it will be loaded and the values within it used,
with the current directory used as the source directory.

ftpallcfg.py file contains the following variables.
===========================
server = <server to ftp to>
username = <Username for access to given server>
remote_dir = <remote server directory>
===========================
'''
import ftplib
import os
import getpass
import sys
import time
import socket

__revision__ = 1.11

SLEEP_SECONDS = 1


class FtpAddOns():
    PATH_CACHE = []

    def __init__(self, ftp_h):
        self.ftp_h = ftp_h

    def ftp_exists(self, path):
        '''path exists check function for ftp handler'''
        exists = None
        if path not in self.PATH_CACHE:
            try:
                self.ftp_h.cwd(path)
                exists = True
                self.PATH_CACHE.append(path)
            except ftplib.error_perm, e:
                if str(e.args).count('550'):
                    exists = False
        else:
            exists = True
        return exists

    def ftp_mkdirs(self, path, sep='/'):
        '''mkdirs function for ftp handler'''
        split_path = path.split(sep)

        new_dir = ''
        for server_dir in split_path:
            if server_dir:
                new_dir += sep + server_dir
                if not self.ftp_exists(new_dir):
                    try:
                        print 'Attempting to create directory (%s) ...' % (new_dir),
                        self.ftp_h.mkd(new_dir)
                        print 'Done!'
                    except Exception, e:
                        print 'ERROR -- %s' % (str(e.args))


def _get_local_files(local_dir):
    '''Retrieve local files list

    result_list == a list of dictionaries with path and mtime keys. ex: {'path':<filepath>,'mtime':<file last modified time>}
    ignore_dirs == a list of directories to ignore, should not include the base_dir.
    ignore_files == a list of files to ignore.
    ignore_file_ext == a list of extentions to ignore.
    '''
    result_list = []
    ignore_dirs = ['CVS', '.svn']
    ignore_files = ['.project', '.pydevproject']
    ignore_file_ext = ['.pyc']

    base_dir = os.path.abspath(local_dir)
    for current_dir, dirs, files in os.walk(base_dir):
        for this_dir in ignore_dirs:
            if this_dir in dirs:
                dirs.remove(this_dir)

        sub_dir = current_dir.replace(base_dir, '')
        if sub_dir:
            break

        for this_file in files:
            if this_file not in ignore_files and os.path.splitext(this_file)[-1].lower() not in ignore_file_ext:
                filepath = os.path.join(current_dir, this_file)
                file_monitor_dict = {
                    'path': filepath,
                    'mtime': os.path.getmtime(filepath)
                }
                result_list.append(file_monitor_dict)
    return result_list


def upload_all(server,
               username,
               password,
               base_local_dir,
               base_remote_dir,
               files_to_update=None):
    '''Upload all files in a given directory to the given remote directory'''
    continue_on = False
    login_ok = False
    server_connect_ok = False

    base_local_dir = os.path.abspath(base_local_dir)
    base_remote_dir = os.path.normpath(base_remote_dir)

    if files_to_update:
        local_files = files_to_update
    else:
        local_files = _get_local_files(base_local_dir)

    if local_files:
        ftp_h = ftplib.FTP()
        try:
            ftp_h.connect(server)
            server_connect_ok = True
        except socket.gaierror, e:
            print 'ERROR -- Could not connect to (%s): %s' % (server, str(e.args))
        except IOError, e:
            print 'ERROR -- File not found: %s' % (str(e.args))
        except socket.error, e:
            print 'ERROR -- Could not connect to (%s): %s' % (server, str(e.args))

        ftp_path_tools = FtpAddOns(ftp_h)

        if server_connect_ok:
            try:
                ftp_h.login(username, password)
                print 'Logged into (%s) as (%s)' % (server, username)
                login_ok = True
            except ftplib.error_perm, e:
                print 'ERROR -- Check Username/Password: %s' % (str(e.args))

            if login_ok:
                for file_info in local_files:
                    filepath = file_info['path']
                    path, filename = os.path.split(filepath)

                    remote_sub_path = path.replace(base_local_dir, '')
                    remote_path = path.replace(base_local_dir, base_remote_dir)
                    remote_path = remote_path.replace('\\', '/')  # Convert to unix style

                    if not ftp_path_tools.ftp_exists(remote_path):
                        ftp_path_tools.ftp_mkdirs(remote_path)

                    # Change to directory
                    try:
                        ftp_h.cwd(remote_path)
                        continue_on = True
                    except ftplib.error_perm, e:
                        print 'ERROR -- %s' % (str(e.args))

                    if continue_on:
                        if os.path.exists(filepath):
                            f_h = open(filepath, 'rb')
                            filename = os.path.split(f_h.name)[-1]
                            display_filename = os.path.join(remote_sub_path, filename)
                            display_filename = display_filename.replace('\\', '/')
                            print 'Sending (%s) ...' % (display_filename),

                            send_cmd = 'STOR %s' % (filename)
                            try:
                                ftp_h.storbinary(send_cmd, f_h)
                                f_h.close()
                                print 'Done!'
                            except Exception, e:
                                print 'ERROR!'
                                print str(e.args)
                                print
                        else:
                            print "WARNING -- File no longer exists, (%s)!" % (filepath)

            ftp_h.quit()
            print 'Closing Connection'
    else:
        print 'ERROR -- No files found in (%s)' % (base_local_dir)

    return continue_on


if __name__ == '__main__':
    import optparse

    default_config_file = u'ftpallcfg.py'

    # Create parser, and configure command line options to parse
    parser = optparse.OptionParser()

    parser.add_option("-l", "--local_dir",
                      dest="local_dir",
                      help="Local Directory (Defaults to CWD)",
                      default='.')
    parser.add_option("-r", "--remote_dir",
                      dest="remote_dir",
                      help="[REQUIRED] Target Remote directory",
                      default=None)
    parser.add_option("-u", "--username",
                      dest="username",
                      help="[REQUIRED] username",
                      default=None)
    parser.add_option("-s", "--server",
                      dest="server",
                      help="[REQUIRED] Server Address",
                      default=None)

    (options, args) = parser.parse_args()

    if (options.username and options.server and options.remote_dir) or \
       os.path.exists(default_config_file):
        local_dir = options.local_dir

        if os.path.exists(default_config_file):
            sys.path.append('.')
            import ftpallcfg

            try:
                server = ftpallcfg.server
                username = ftpallcfg.username
                remote_dir = ftpallcfg.remote_dir
            except AttributeError, e:
                print "ERROR --", str(e.args)
                print
                print 'Value(s) missing in %s file! The following values MUST be included:' % (default_config_file)
                print '================================'
                print 'server = <server to ftp to>'
                print 'username = <Username for access to given server>'
                print 'remote_dir = <remote server directory>'
                print '================================'
                sys.exit()
        else:
            server = options.server
            username = options.username
            remote_dir = options.remote_dir

        p = "FTP_PWD"

        try:
            upload_all(server, username, p, local_dir, remote_dir, [])
        except KeyboardInterrupt:
            print 'Exiting...'
    else:
        print 'ERROR -- Required option not given!'
        print __revision__
        print __doc__
        print
        parser.print_help()
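
To see what the trigger in the modified autoProcessTV.py actually does, here is a minimal, standalone sketch of the extraction step. The sample "Moving" line and the video share are made-up values; the regular expression and the path handling are the same as in the script above.

import os
import re

# Hypothetical Sick Beard output line (example values only)
line = "Moving file from /home/user/downloads/Some.Show.S01E01/episode.mkv to /home/user/tv/Some Show/Season 1/Some Show - S01E01.mkv"

pattern = r'Moving file from .+ to (.*)'
match = re.search(pattern, line)
if match:
    target = match.group(1)
    sep = os.sep
    # Local directory that gets handed to upload.py
    local_file = sep.join(target.split(sep)[:-1])
    # Remote path: the shared folder plus the last two directory levels (show/season)
    remote_location = os.path.join(sep, "video", sep.join(target.split(sep)[-3:-1]))
    print local_file        # /home/user/tv/Some Show/Season 1
    print remote_location   # /video/Some Show/Season 1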
on Monday, January 14, 2013
Ever wanted to know the total size of all memtables on a Cassandra node? Here is a little one-liner that gets you the total size in bytes.

nodetool cfstats | grep 'Memtable Data Size' | awk '{sum+=$4}END{print sum}'
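
If you prefer doing the same from Python, say inside a small monitoring script, here is a rough equivalent of the pipeline above. It is only a sketch: it assumes nodetool is on the PATH and, like the awk expression, that the byte count is the fourth whitespace-separated field of each "Memtable Data Size" line (Python 2.7 for subprocess.check_output).

import subprocess

# Run "nodetool cfstats" and sum up all "Memtable Data Size" values (in bytes)
output = subprocess.check_output(["nodetool", "cfstats"])
total = 0
for line in output.splitlines():
    if "Memtable Data Size" in line:
        total += int(line.split()[3])
print total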
on Thursday, January 10, 2013
I found it annoying that Cygwin asks for the passphrase of your private ssh key every time you start an ssh connection. In a regular bash shell under Ubuntu, the passphrase is remembered so it only needs to be entered once. Here is a nice recipe that helps Cygwin remember the passphrase (thanks to this blog post). Add the following at the end of your ~/.bash_profile:

if [ -f ${HOME}/.ssh-agent ]; then
    . ${HOME}/.ssh-agent > /dev/null
fi
if [ -z "$SSH_AGENT_PID" -o -z "`/usr/bin/ps -a|/usr/bin/egrep \"^[ ]+$SSH_AGENT_PID\"`" ]; then
    /usr/bin/ssh-agent > ${HOME}/.ssh-agent
    . ${HOME}/.ssh-agent > /dev/null
fi
ssh-add ~/.ssh/id_rsa
on Tuesday, August 28, 2012
Today I had to use Redis for the first time, and it doesn't seem to be very well supported under Windows at this point. Coming from Linux, I try to use Cygwin under Windows as often as I can. So here is what I did to build the latest Redis version (there were some old binaries for download, but I wanted to use a newer version). First make sure you have the Cygwin packages "make" and "gcc" installed. Then open a Cygwin terminal and follow the steps under "Installation", but do not run "make" yet. Before you run "make", open the file src/redis.c and add the following block somewhere at the top of the file (I copied it in one line before the first #include statement):

#ifdef __CYGWIN__
#ifndef SA_ONSTACK
#define SA_ONSTACK 0x08000000
#endif
#endif

This is a manual change which I found in this issue. Finally run "make" and ignore the warnings. If all goes well, you should have a bunch of new .exe files at the end: redis-benchmark.exe, redis-check-aof.exe, redis-check-dump.exe, redis-cli.exe and redis-server.exe. Copy them into the bin folder of your Cygwin installation and restart the terminal. To test, execute "redis-cli -h <your-redis-server> -p <your-redis-port> ping" and you should hopefully get a PONG back.
on Tuesday, August 7, 2012
Next week I'll be changing jobs inside EA. My time at Playfish will be over and I will be working for another studio in Stockholm. Playfish mail is running on the Google mail infrastructure. I am sure I won't be able to access my corporate mail account after I have left, so I thought it was a good idea to get a backup of all my corporate emails. One promising program is gmvault, which I found after stumbling upon an old Matt Cutts blog post about backing up Gmail on Linux. Matt suggests getmail to get the job done, but that project seems to have been dead since 2009.

I am running Ubuntu 10.04 Lucid Lynx. First I set up a new virtualenv, which you should always do when you have to install new pip packages (like gmvault). Just follow these instructions to get started with virtualenv. Once you have your virtualenv activated, run pip freeze to check which packages are installed. If you created the virtualenv with the --no-site-packages option like me, there should only be two packages: distribute and wsgiref. Make sure the distribute package is at least version 0.6.24. On my Ubuntu 10.04 I needed to upgrade.

pip install distribute==0.6.25

Finally install gmvault in your virtualenv and run it.

pip install gmvault
gmvault sync reik.schatz@old-company.com

First it will print some instructions for you. After pressing Enter, a browser window opens and you have to log into your mail account. You will be told that the program gmvault wants access to your account. Click Accept. This stores a Gmail XOAuth token on your local disk, e.g. as /home/user/.gmvault/reik.schatz@old-company.com.oauth. gmvault then uses this token to access your email account. Press Enter again to start the download. It took about 3 minutes to download 1200 emails via IMAP. On a side note, if you want to learn more about IMAP and Python, you should read this great book, which I just finished.
on Tuesday, July 17, 2012
I just finished compiling a FiSH module for irssi that runs on Debian 6.0.5 64-bit. As some of the guides out there are somewhat broken because links have changed, I decided to post an updated version here.

First verify what system you are running on.

lsb_release -a
No LSB modules are available.
Distributor ID: Debian
Description: Debian GNU/Linux 6.0.5 (squeeze)
Release: 6.0.5
Codename: squeeze
uname -mrs
Linux 2.6.32-5-amd64 x86_64

Alright, so we are on Debian x86_64. Let's install some prerequisites.

sudo apt-get install irssi irssi-dev libglib2.0-0 libglib2.0-dev build-essential unzip

This installs irssi and the matching irssi development files in the same version (0.8.15 on Debian 6.0.5). Now run the following commands line by line. This basically follows this or this guide, but some of the links have changed.

wget http://alpine.nethq.org/clandmeter/src/FiSH-irssi.v1.00-RC5-source.zip
unzip FiSH-irssi.v1.00-RC5-source.zip
cd FiSH-irssi.v1.00-RC5-source
mkdir MIRACL;cd MIRACL;cp ../mir_amd64 amd64
wget http://www.certivox.com/wp-content/themes/certivox/res/miracl.zip
unzip -j -aa -L miracl.zip
bash amd64
cp miracl.a ../;cd ..

Now it's time to update the Makefile. Change the first 3 lines to the following values.

glib_inc = /usr/include/glib-2.0
glib_dir = /usr/lib/glib-2.0
irssi_dir = /usr/include/irssi

Make sure these directories exist. They should have been created earlier when installing the packages above. Double check that glib_inc contains glib.h and gmodule.h. Finally run:

make amd64

This might give you some warnings, but if there were no errors you should now have a libfish.so file. If you do get error messages, start reading the make output from the top. The first time I tried, it couldn't include glib.h and gmodule.h because I had the wrong directory configured in the Makefile.

To use your new FiSH module, copy it to the proper location.

sudo cp libfish.so /usr/lib/irssi/modules/

To autoload the FiSH module during irssi startup, add a line to your startup file.

echo "load fish" >> ~/.irssi/startup

Voilà, enjoy encrypted IRC chat.
on Friday, June 1, 2012
Today I spent some time chasing ghosts and a missing textarea value in my POST data. All form elements were posted correctly, but the textarea value was missing. I am using WYMeditor on the textarea to turn the element into a rich text field. The textarea value was missing from the POST data because the textarea has its style attribute set to display:none, which is something the class="wymeditor" attribute does to it. The important bit is the submit button, which also needs a class attribute - something I had missed when copying from the WYMeditor demo page. After giving the submit button a class="wymupdate", everything works just fine.
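
For reference, here is a minimal sketch of the kind of markup that works; the form action and field names are placeholders, the two class attributes are the important part.

<form action="/save" method="post">
  <!-- class="wymeditor" turns the textarea into a rich text field (and hides the original textarea) -->
  <textarea name="content" class="wymeditor"></textarea>
  <!-- class="wymupdate" makes WYMeditor write the editor content back into the textarea on submit -->
  <input type="submit" class="wymupdate" value="Save" />
</form>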