Sunday, January 24, 2010

Emacs and broken syntax highlighting

I've been struggling with syntax highlighting bugs in ruby-mode and javascript-mode for quite some time. Today I tried espresso-mode, which claims to be a fixed javascript mode, and it fails too. I'm not sure why, but this seems to be quite typical for Emacs and not typical for other advanced editors (vim & textmate, at least). Maybe something is wrong with Emacs's syntax highlighting facility, or maybe major mode authors just cannot use it correctly.

One of the potential problems is the regular expression facility of Emacs Lisp. First of all, it uses a rather inconvenient dialect of regexps: many constructs need to be escaped with '\' (alternation (|) and grouping, for example). Second, ELisp doesn't have special syntax for regexps; you have to use strings, where the backslash has to be escaped again. So even the simple regexp a|b turns into "a\\|b".

Two years ago I wrote a small regexp DSL for Emacs Lisp. Using this DSL you can express complex regular expressions readably.

So today I found the regexp literal expression in espresso.el -- "[=(,:]\\(?:\\s-\\|\n\\)*\\(/\\)\\(?:\\\\/\\|[^/*]\\)\\(?:\\\\/\\|[^/]\\)*\\(/\\)" -- translated it to my DSL and got:

(redsl-to-regexp `(concat (char-set "=(,:")
                          (* (or (whitespace) "\n"))
                          (cap-group /)
                          (or "\\/" (neg-char-set "/*"))
                          (* (or "\\/" (neg-char-set "/")))
                          (cap-group /)))

which is simply wrong. It obviously fails for a simple regexp -- /\\/

I transformed it into:

(redsl-to-regexp `(concat (char-set "=(,:")
                          (* (or (whitespace) "\n"))
                          (cap-group /)
                          (+ (or (neg-char-set "\\/")
                                 (concat "\\" (anychar))))
                          (cap-group /)))

which is logical and works. And hooray, hooray, I finally have working syntax highlighting in javascript mode!

I've uploaded regex-dsl to http://github.com/alk/elisp-regex-dsl and will send an espresso.el fix soon.

Monday, January 11, 2010

Speeding up ruby interpreter in Debian & Ubuntu

It's an ancient trick, but I decided to blog about it while I still remember it.

Unix (or ELF, to be more precise) shared libraries suck. Well, they have many useful features that are not found on other platforms (like LD_PRELOAD), but their performance is worse than on other systems, and sometimes dramatically so.

The problem is that shared libraries are usually compiled as position-independent code (PIC). This is quite a bit slower than normal code (especially on older ISAs, like i386, that don't have PC-relative addressing). The other reason for shared library slowness is the indirection of almost all function calls. This indirection gives you extra flexibility: ELF shared libraries allow you to replace almost every function. Sometimes this is very convenient, but it is slow.

Now imagine modern C or C++ code, with small functions that frequently call each other. The cost of the extra indirection and the PIC function prologues becomes quite high. GCC's -fvisibility switch allows you to give up some of this flexibility for gains in speed, but few libraries use it yet.
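
As an illustration, here is roughly how a library can use symbol visibility to avoid the indirection for its internal calls. This is just a sketch: the library and function names are made up, and -fvisibility=hidden plus the visibility attribute is one way to do it, not necessarily how any particular library does.

/* visibility.c -- sketch of trading ELF interposition for speed.
 *
 *   gcc -O2 -fPIC -fvisibility=hidden -shared -o libvis.so visibility.c
 */

__attribute__((visibility("hidden")))
int helper(int x)            /* hidden: not interposable, called directly,
                                no PLT indirection inside the DSO */
{
    return x * 2;
}

__attribute__((visibility("default")))
int public_entry(int x)      /* the only symbol the library actually exports */
{
    return helper(x) + 1;
}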

You can run a simple experiment (as I did a few years ago). Write a small program that does malloc/free in a tight loop. Link it normally (i.e. with shared libc) and statically. Then compare their speed. I remember a gain as high as 50% with a statically linked libc, which doesn't pay the performance price for ELF shared library flexibility.
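
If you want to try it yourself, here is a minimal sketch of such a benchmark. The iteration count, allocation size and build commands are my assumptions, not the exact program I used back then:

/* malloc-bench.c -- compare dynamically vs. statically linked libc.
 * Build it both ways and time each binary:
 *
 *   gcc -O2 -o bench-shared malloc-bench.c           (shared libc, calls go through the PLT)
 *   gcc -O2 -static -o bench-static malloc-bench.c   (static libc, direct calls)
 *   time ./bench-shared ; time ./bench-static
 */
#include <stdlib.h>

int main(void)
{
    for (long i = 0; i < 50000000L; i++) {
        /* volatile keeps the compiler from optimizing the pair away */
        void * volatile p = malloc(64);
        free(p);
    }
    return 0;
}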

There's a sort-of-hack that trades memory efficiency for speed. It is possible to link normal (i.e. non-PIC) code as a shared library (at least on i386). This way you'll have minimal function call indirection and no variable access indirection at all, and you won't pay the PIC price either. The only downside of this method is that the dynamic linker has to patch the TEXT section pages with relocated addresses, so it won't be possible to share those pages between different processes. The performance gain may well be worth it, though. NVIDIA folks, for example, build their libGL this way, and I'm sure they know what they're doing.
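
A rough sketch of what that looks like with GCC on i386 (the library and names are made up, and the exact flags may vary by toolchain):

/* fastlib.c -- toy example of a shared library built from non-PIC code.
 *
 *   gcc -O2 -fPIC   -shared -o libfast-pic.so    fastlib.c   (usual way, sharable text pages)
 *   gcc -O2 -fno-pic -shared -o libfast-nopic.so fastlib.c   (non-PIC; accepted at least on i386)
 *
 * readelf -d libfast-nopic.so | grep TEXTREL shows the text relocations that the
 * dynamic linker has to patch at load time, which is why those pages are no
 * longer shareable between processes.
 */

int call_count;              /* global data: non-PIC code addresses it absolutely */

int fast_add(int a, int b)
{
    call_count++;            /* absolute address -> text relocation when built non-PIC */
    return a + b;
}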

Why am I mentioning ELF shared libraries? Because Debian (and Ubuntu too) builds the ruby interpreter as a shared library, and according to my measurements, this gives around a 10% performance penalty. To regain part of this loss, I simply edit ruby's 'configure' script and remove all mentions of -fPIC. I then run 'dpkg-buildpackage' and install the resulting .debs as usual. This brings performance of the ruby interpreter back to its normal level (i.e. when it's built statically). Another optimization that I use is passing better GCC optimization flags; the '-march=' flag is quite important on i386.

P.S. Read the excellent http://people.redhat.com/drepper/dsohowto.pdf by GNU libc guru Ulrich Drepper if you want to know more about ELF shared libraries.

Monday, January 4, 2010

Power of modern javascript interpreters

Today my colleague Vlad Zarakovsky, who is interested in porting gpicker scoring to JS or finding something close to it, showed me some filtration code and its performance results. He demonstrated that a filtration implementation (quite non-trivial, it seems, from a quick glimpse at the source) takes 33 milliseconds on 22k filenames, which is not bad. This result was obtained on Safari, which sports a state-of-the-art javascript interpreter.

Another thing we noted is that sorting takes an additional 14 milliseconds, which is quite large compared to the much more sophisticated scoring. I thought it should be due to the cost of the transition from native qsort code to the JS code that implements the comparator function. Out of curiosity, I proposed trying a pure JS sort instead.

The results were interesting. At first it was much slower than the built-in qsort, but Vlad then discovered that it's actually faster on random data (I haven't asked by how much, but that's not very important). All this led me to the following conclusions:

  • Naive qsort on real-world, non-random data may be 100 times slower than a proper qsort implementation. This is understandable, but it's the first time I've personally run into such a case.
  • I was right that a pure JS qsort can beat a C qsort that has to use a slow and barely optimizable comparator.
  • In general, state-of-the-art javascript interpreters are indeed approaching the speed of optimized C. If only we had a Ruby interpreter of the same quality :)

So next time your JS code depends on fast sorting, consider picking a pure JS sort implementation, but pick wisely. Also, don't forget that there are still some very slow JS interpreters around (yes, I'm talking about 'the-cursed-browser').

BTW, I'm sure the same applies to the C standard qsort. In cases where the comparator is quite simple, an implementation that is able to inline the comparator should be much faster. For gpicker I've stolen the source of glibc's qsort implementation and made sure that it's able to inline the comparator.
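
To make the contrast concrete, here is a small sketch (the names are mine, not gpicker's code) of why the standard qsort can't help itself here:

/* cmp-inline.c -- why a comparator behind a function pointer is hard to inline. */
#include <stdlib.h>

static int cmp_int(const void *a, const void *b)
{
    int x = *(const int *)a, y = *(const int *)b;
    return (x > y) - (x < y);
}

void sort_with_libc(int *v, size_t n)
{
    /* libc's qsort is compiled separately, so every comparison is an
     * indirect call through the pointer -- no chance of inlining cmp_int. */
    qsort(v, n, sizeof(int), cmp_int);
}

/* If instead you copy a qsort implementation into your own project (as the
 * post describes for gpicker) and compile it in the same translation unit
 * as the comparator, the compiler sees both sides and can inline the
 * comparison into the sorting loop. */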