General Resolution is not required

The result of the General Resolution about the init system coupling is out, and the outcome is, not entirely surprisingly: “General Resolution is not required”.

When skimming -devel or -private from time to time, one easily gets the impression that we are all a bunch of zealots, all too eager to fight. People argue in the worst possible ways. People make bold statements about the future of Debian if solution X is preferred over solution Y. People call each other names. People leave the project.

At some point you realize we’re not all a bunch of zealots; it is usually only the same small subset of people involved in those discussions. It’s reassuring that we still seem to have a silent majority in Debian that, without much fuss, just does what it can to make Debian better. In this sense: a General Resolution is not required.

What are the most popular .vimrc options?

I always wondered what the most popular options are that you usually find in .vimrc files. So I downloaded 155 .vimrc files from the net (mostly from dotfiles.org and github.com) and wrote a little script which counts the number of times each option has been set. Since most options come in both a long and an abbreviated form, I mapped the abbreviations to the long version whenever I recognized them.
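The counting script itself isn’t included in the post, but a minimal sketch of how it could work might look like this (the shortcut table here is only a tiny, illustrative subset of Vim’s abbreviations, not the full mapping):

```python
import re
from collections import Counter

# A small, illustrative subset of Vim's option abbreviations;
# the real mapping is much longer.
SHORTCUTS = {'sw': 'shiftwidth', 'ts': 'tabstop', 'et': 'expandtab',
             'ai': 'autoindent', 'nu': 'number', 'ic': 'ignorecase'}

# Matches lines like "set option", "set option=value", "setlocal option".
SET_RE = re.compile(r'^\s*set(?:local|global)?\s+(.+)')

def count_options(vimrc_lines):
    counts = Counter()
    for line in vimrc_lines:
        m = SET_RE.match(line)
        if not m:
            continue
        for opt in m.group(1).split():
            opt = opt.split('=')[0]  # strip any value (e.g. "sw=4" -> "sw")
            counts[SHORTCUTS.get(opt, opt)] += 1
    return counts
```

Feeding every downloaded .vimrc through `count_options` and summing the resulting counters would produce a ranking like the one below.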

So without further ado, here are the most popular .vimrc options (without their values). The number specifies how many times an option has been set; the most popular option is at the bottom:

10 tselect
10 dictionary
10 runtimepath
11 mousehide
11 t_vb
11 foldlevel
11 foldopen
12 suffixes
12 matchtime
12 fileencoding
13 modelines
13 wrap
14 sidescrolloff
14 clipboard
14 lines
14 novisualbell
15 linebreak
15 cursorline
15 fileformats
15 columns
15 cindent
16 undolevels
16 shiftround
16 lazyredraw
16 completeopt
18 modeline
18 whichwrap
18 comments
18 wildignore
19 list
19 autowrite
19 foldcolumn
19 grepprg
19 titlestring
20 autoread
20 title
21 foldenable
21 cmdheight
22 pastetoggle
23 formatoptions
23 fileencodings
24 tags
24 directory
25 ttyfast
26 termencoding
26 complete
27 nohlsearch
27 noerrorbells
27 visualbell
28 shortmess
30 showmode
31 wildmode
32 t_Co
32 listchars
32 backupdir
34 hidden
34 backup
35 smarttab
35 foldmethod
36 viminfo
36 textwidth
37 scrolloff
37 nobackup
41 nowrap
44 encoding
47 guifont
51 guioptions
53 smartcase
54 wildmenu
57 smartindent
60 mouse
63 background
64 softtabstop
66 history
70 showmatch
72 ignorecase
74 showcmd
74 laststatus
79 number
82 hlsearch
91 statusline
94 expandtab
94 ruler
96 autoindent
96 backspace
99 tabstop
109 incsearch
114 shiftwidth
124 nocompatible

(out of 155 .vimrc files)

Fun fact: nocompatible is the most popular but also the most useless option. Merely having a .vimrc automatically enables nocompatible mode in Vim.

How to get the most precise time, comparable between processes in Python?

Let’s consider the following scenario: I have two Python processes receiving the same events, and I have to measure the delay between when process A received an event and when process B received it, as precisely as possible (i.e. with sub-millisecond precision).

Using Python 2.7 on a Unix system you can use the time.time method, which provides the time in seconds since the epoch and typically has a resolution of a fraction of a millisecond on Unix. You can use it in different processes and still compare the results, since both processes get the time since the epoch, a defined and fixed point in the past.

On Windows, time.time also provides the time since the epoch, but the resolution is in the range of 10 ms, which is not suitable for my application.

There is also time.clock, which is super precise on Windows but much less precise on Unix. The major drawback is that it returns the time since the process started, or since the first call to time.clock within that process. This means you cannot compare the results of time.clock between two processes, as they are not calibrated to a common t-zero.

I had high hopes for Python 3.3, where the time module was revamped, and I read about time.monotonic and time.perf_counter. Especially time.perf_counter looked like it would suit my needs: the documentation said it provides the highest available resolution for the system and is “system-wide”, in contrast to, for example, the new time.process_time, which is “process-wide”. Unfortunately it turned out that time.perf_counter behaves like time.clock on Python 2.7: it provides the time since the process started, or since the first time the method was called within the process. The results of time.monotonic are comparable between processes, but again not precise enough on Windows.

Here is a small script which demonstrates that the times provided by time.clock and time.perf_counter are not comparable between processes. It starts two processes and lets both of them print the output of the timer to stdout. In the output, the times should be monotonically increasing. Since I let process 2 sleep for one second before calling the timer method for the first time, the output of this process is usually about one second smaller when using time.clock or time.perf_counter.

#!/usr/bin/env python

from multiprocessing import Process
import time

# Timer functions to compare; each name is looked up on the time module.
timers = ['clock', 'time', 'monotonic', 'perf_counter']

def proc(timer):
    timer = getattr(time, timer)
    # Sleep before the first timer call, so timers with a per-process
    # zero point (clock, perf_counter) start about one second "late".
    time.sleep(1)
    for i in range(3):
        print('P2 {time}'.format(time=timer()))
        time.sleep(1)

if __name__ == '__main__':
    for t in timers:
        print("Using {timer}".format(timer=t))
        p = Process(target=proc, args=(t,))
        timer = getattr(time, t)
        p.start()
        # Print the parent's view of the same timer, interleaved with P2.
        for i in range(3):
            print('P1 {time}'.format(time=timer()))
            time.sleep(1)
        p.join()

The result when running on Windows with Python 3.3:

$ python timertest.py
Using clock
P1 6.146032526480321e-06
P1 0.9926582847820045
P2 2.9612702173041547e-05
P1 1.9941743992602412
P2 1.0008579302676737
P2 2.0022709590185346
Using time
P1 1368614235.509732
P1 1368614236.511172
P2 1368614236.601301
P1 1368614237.512612
P2 1368614237.602741
P2 1368614238.604181
Using monotonic
P1 484.636
P1 485.63800000000003
P2 485.738
P1 486.639
P2 486.73900000000003
P2 487.741
Using perf_counter
P1 12.390910576623565
P1 13.39050745276285
P2 7.542858100680394e-06
P1 14.39190763071843
P2 1.0014012954160376
P2 2.0041399116368144

So as far as I can see, there is no way of getting comparable times between two processes on Windows with a precision better than 10 ms. Is that correct, or am I missing something?

Wee! Wheezy is out (better late than never)

Last week we released Wheezy, roughly two years after our previous release, Squeeze.

I’d like to thank all the contributors inside and outside of Debian for your fine work! Every single contribution, no matter how big or small, added up to the wonderful release we finished last week. Without you this release would not have been possible. Keep up the good work, guys, and make Jessie rock even harder!

PS: It is very nice to once again see fresh packages rolling into unstable and to spend some time fixing broken dependencies :)

Synchronizing Google Mail Contacts with Thunderbird

Dear Lazyweb,

can anyone recommend a good Thunderbird extension that synchronizes the address book with Google Mail? So far I have tried Google Contacts, but something went wrong with the syncing and some contacts were deleted on both sides. To avoid this problem, one can use Google Contacts in read-only mode (it will only fetch contacts from Google and never write to it), but then you have to import new Thunderbird contacts into Google Mail manually.

Google introduced CardDAV in December 2012, which allows for syncing of contacts, but since Thunderbird’s development is apparently on hold, this is probably not gonna be supported out of the box. There are some other extensions for Thunderbird, but since synchronization is hard and a lot more complicated than “replace the older version with the newer one”, I’m looking for something mature and well tested.

Before someone suggests it: I know Evolution has this feature built in. I gave it a try last week and found so many other grave bugs in the calendar and newsgroups that Evolution is simply unfit for my needs. I really like Thunderbird and want to stick with it for a few more years until I have to look for something else.

Yours truly,

Basti

Shiny new iPod Nano 6G… fffffffuuuuuuuuuuuu

So I got an iPod Nano (6th generation) for Christmas this year, just in time, since my trusty old iPod Mini started to beg for retirement after almost 8 years of use.

Since my old iPod had worked like a charm all those years, I expected smooth sailing when I plugged in my new iPod Nano. Gnome recognized it correctly and mounted the device. The iPod showed up in Rhythmbox as I was used to, and I started to fill it with some music. Everything worked as expected: Rhythmbox copied the music to the iPod without complaining, and after unmounting the iPod and starting it, it was empty. Wait, what? Why is it empty? Didn’t I just… So I tried again, and again, with the same result.

Half an hour later I found out that libgpod (the iPod “driver” for Linux) supports all iPods except the iPod Nano 6G. Bummer. Apparently Apple changed the algorithm used to calculate the checksums of the files on the device, and since that algorithm is unknown, you cannot successfully write to it with free software.

That means this device is technically unusable for Linux users, since iTunes doesn’t run on Linux. However, there seems to be a way to use the iPod Nano with libgpod if you trust this guy and are willing to use his binary-only (!) file with libgpod (which I am not). And somehow the folks over at Spotify managed to get the iPod Nano working in their Linux client, but they don’t provide the code either.

The device being already more than two years old, I don’t have much hope that the iPod Nano will work on Linux with libgpod in the foreseeable future. On the bright side, the device is not totally useless, as it comes with an FM radio…

Introducing The Art of Asking

Since October 2011 my flatmate and I have been quite busy realizing a little pet project of ours called The Art of Asking. The ultimate goal is to visualize the world’s opinion in an intuitive fashion and to make it easy for everyone to play around with the data.

The idea behind The Art of Asking is that users submit interesting questions which are answered by users around the world. But instead of showing only the boring result, we want to provide interesting insights and statistics about the answers given.

For now, users can see the results of a question visualized by geographical region. For example, on the page for the question ‘How are you today?’ you can see the interactive map with the pie chart. The map shows the average/dominating answer for each continent, encoded by color, and the pie chart shows the distribution of the different answers for that region. If you move the mouse over a continent, the pie chart updates and shows the distribution of answers for that continent. You can also click to zoom into the map and see the same for countries and regions. This allows you to investigate how the answers are distributed around the world.

This is already quite nice and fancy to play around with, but of course we want much more. Right now we’re working on a feature which will allow users to combine two arbitrary questions and see how the answers are related. This doesn’t sound like much, but it is very addictive to crawl through the list of questions and find interesting correlations. Here is a little plot of how it could look (the data is from the actual database of answers).

For each possible combination of answers it shows the percentage of people who answered with that combination. Of course, those plots only make sense when enough users have answered both questions you want to compare, which right now is not very often the case, so I guess we’ll roll out that feature sometime later when we have more data. But we have lots of ideas and are working on further ways to investigate the answers. One obvious low-hanging fruit, for example, would be the distribution of the answers over time.
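The percentage per answer combination is straightforward to compute once each question’s answers are keyed by user. A small sketch of the idea (the function name and data layout are made up for illustration; the site’s actual code is not public):

```python
from collections import Counter

def combination_percentages(answers_q1, answers_q2):
    """answers_q1 and answers_q2 map a user id to that user's answer
    for each of the two questions. Returns, for every answer
    combination, the share of users who answered both questions."""
    common = answers_q1.keys() & answers_q2.keys()  # users who answered both
    pairs = Counter((answers_q1[u], answers_q2[u]) for u in common)
    total = len(common)
    return {combo: n / total for combo, n in pairs.items()}
```

The sparsity problem mentioned above shows up directly here: `total` is the number of users who answered *both* questions, which shrinks quickly for arbitrary question pairs.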

We’ve been working on this project in our spare time for over a year now, and we built it more or less from scratch. I wrote a small WSGI framework in Python, and on top of that the WSGI application which runs the site. We use MongoDB to store the data, uWSGI and nginx for the server, Jinja2 and Bootstrap for the HTML, and D3.js for the visualization of the data, where Maci did a wonderful job realizing the interactive map and charts.
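For readers unfamiliar with the stack: the core of any WSGI setup like this is just a callable that the server (uWSGI in our case) invokes per request, with nginx in front. A minimal, generic example of such a callable (not the actual application):

```python
def application(environ, start_response):
    # Minimal WSGI callable: the server calls this once per request,
    # passing the request environment and a callback for the status
    # line and response headers, and iterates over the returned body.
    body = b'Hello from a WSGI app'
    start_response('200 OK', [('Content-Type', 'text/plain'),
                              ('Content-Length', str(len(body)))])
    return [body]
```

A framework like the one described mostly adds routing, request parsing, and template rendering around this single entry point.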

We’ve been running this site since July 2012 and are already quite satisfied with the number of users and the quality of the questions. But of course we could always use more (especially more answers). So if you want to try it out, go to theartofasking.com and fill out the blank spots on the map! We’re happy about every answer and question we can get and are eager to hear your suggestions.

Give Camp Berlin looking for volunteers

My friend Martijn from Talentspender is co-organizing a GiveCamp in Berlin. They are looking for IT professionals and designers who want to spend one weekend of their time at the GiveCamp supporting non-profit organizations by solving a specific problem. It is for a good cause, and there is no further commitment after the camp. Plus, you will be provided with free food and drinks. So if you are interested and happen to be in Berlin between 30 November and 2 December 2012, have a look at their website and register for the GiveCamp.

Quoting from their flyer:

A GiveCamp is a weekend-long event where technology professionals donate their time to provide custom solutions for non-profit organizations. Voluntarily, without further commitment and for a good cause: the long-term strengthening of the organizations.

How does it work?

  • Teamwork of experts during one weekend, with regular input from the NPOs
  • Clearly defined projects, to be completed at the GiveCamp
  • Young professionals mentored by experienced experts
  • Meals and drinks are provided

Who are we looking for?

  • Software developers, database administrators, designers and entrepreneurs
  • From students to senior experts
  • Team players with enthusiasm for interdisciplinary projects