Monday, April 27, 2015

SUSE HackWeek12 - YaST: Replacing Travis with Jenkins

Replacing Travis with Jenkins for YaST


I decided to look into a possible Travis replacement as my Hackweek 12 project.

Currently we use both Travis and Jenkins for continuous integration in YaST projects. Unfortunately Travis has many disadvantages which require additional work or limit what we can run in a Travis job. See the hackweek project for a summary of the pros and cons.

The biggest Travis disadvantage is that the builds run on an Ubuntu 12.04 system, which is three years old, so it is very difficult to find a recent compiler, Ruby interpreter, libraries, etc. for it.

The Jenkins advantage is that it runs on our own server, so we can run the latest openSUSE very easily and avoid the problems with porting YaST packages to Ubuntu and backporting the development tools.

Jenkins Plugins


I found out that there are several Jenkins plugins which can be used to replace Travis: the GitHub Plugin, the GitHub Pull Request Builder plugin and the Embeddable Build Status Plugin.

You can install the plugins in your Jenkins instance like this:
  • Login into Jenkins
  • Go to Manage Jenkins -> Manage Plugins
  • In the Available tab select the GitHub Plugin, GitHub Pull Request Builder plugin and Embeddable Build Status Plugin and install them

Configuring the GitHub Plugin

  • Generate a new access token at GitHub and select the public_repo and repo:status scopes. If you want to allow automatic webhook setup, also select write:repo_hook (you can add/remove the permissions later).
  • Add the token to Jenkins -> Manage Jenkins -> Configure System -> GitHub Web Hook section -> OAuth token field
  • Put the same token into the "GitHub Pull Request Builder" section, "Access Token" field.

Create a new Jenkins job for building commits (pull requests are handled separately):

  • Select Freestyle project
  • Put the GitHub URL (https://github.com//) into the GitHub project field
  • In the Source Code Management section select Git and put the same URL here
  • Leave the Branch Specifier field empty to build all branches
  • In the Build Triggers section - check Build when a change is pushed to GitHub
  • Add Build Step -> select Set build status to "pending" on GitHub commit
  • Add post-build action - Set build status on GitHub commit
  • Configure the other parameters of the build as needed
See more details here.

Configuring the GitHub Pull Request Builder

  • Add the created GitHub token to Jenkins -> Manage Jenkins -> Configure System -> GitHub Pull Request Builder section

Create a new job for building pull requests:

  • Select Freestyle project
  • Put the GitHub URL (https://github.com//) into the GitHub project field
  • In the Source Code Management section select Git and put the same URL here
  • In Advanced option set Name to origin and Refspec to +refs/pull/*:refs/remotes/origin/pr/*
  • Set Branch Specifier to ${sha1}
  • In the Build Triggers section - check the GitHub Pull Request Builder option
  • Check the Use github hooks for build triggering option
  • Set Commit Status Context to continuous-integration/jenkins-ci/pr (or something similar) to use a different ID for pull requests and avoid clashing with the GitHub Plugin configured in the previous step
  • Set Admins or whitelisted users so you do not have to manually trigger builds for trusted developers - the plugin avoids automatically running untrusted code on your server

Using Both Plugins in One Job?


I tried to use both plugins in a single job to reduce the number of jobs, but it did not work for me - the pull requests were not processed at all.

If you find out how to solve it, let me know...


Coveralls Support?


The question was whether Coveralls code coverage can also run outside the Travis environment.

I found out that it is possible; you just need to set the COVERALLS_REPO_TOKEN environment variable and some other variables containing the Git branch, build number, etc. See more details here.

You need to store the token in Jenkins and set it during the build. (You could put the token directly into the .coveralls.yml file, but that's not a good idea for a public Git repository...) Fortunately there is a Jenkins plugin which helps with this: it can inject a secret into an environment variable. The nice feature is that it can also filter the console output to make sure your password/token does not leak into the build output.
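For completeness, here is a minimal sketch of the Ruby side, assuming the coveralls gem is used from the test helper (COVERALLS_REPO_TOKEN is the only variable name taken from the Coveralls documentation, the rest of the wiring is left to the Jenkins job):

# spec/spec_helper.rb - a sketch, assuming the "coveralls" gem is in the Gemfile
require 'coveralls'

# Coveralls.wear! installs the SimpleCov formatter and sends the results to
# coveralls.io; the repository token and the CI metadata are read from
# environment variables (e.g. COVERALLS_REPO_TOKEN), so the Jenkins job only
# needs to export them before running the test suite.
Coveralls.wear!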


The Current State


I have created some experimental jobs to try the plugins, e.g. yast-registration-github-ci, yast-devtools-github-ci or yast-journal-github-ci.

The jobs are not ready to fully replace the Travis integration, but they work and my plan is to continue with this project later.


TODO?


There are still many things to solve:

  • Coveralls - pass the Git data (branch name, etc.) from Jenkins, set the COVERALLS_REPO_TOKEN variable and make sure it's not logged in the console to avoid compromising the value
  • Find a way to define the scripts started in Jenkins. We need to replace the .travis.yml files with something equivalent and share the common parts effectively, very likely as a shared Rake task (see the sketch after this list).
  • The current Jenkins jobs use active Git polling, we should switch that to the GitHub web hooks
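Just to illustrate the direction, a rough sketch of such a shared Rake task - the task and step names are purely made up here, nothing like this exists in the YaST repositories yet:

# Rakefile - a hypothetical shared "ci" task, roughly equivalent to the
# script section of the current .travis.yml files
task :ci do
  # the individual steps are placeholders, each module would plug in its own
  sh 'rake check:syntax'
  sh 'rake test:unit'
  sh 'rubocop'
end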
So stay tuned, I'll post updates on this topic...

Bonus Section


During the implementation I found some other interesting possibilities:


Collecting Code Metrics and Code Coverage


The RubyMetrics Jenkins plugin can be used to collect Flog code metrics and RCov code coverage statistics.



That looks nice, but the practical value is low. The code coverage can be collected by Coveralls, which can comment on the changes in pull requests, and the code quality can be scanned e.g. by CodeClimate, which provides more details about the code and evaluates the code quality better in general. Moreover, it offers some hints about what to fix and how, not just plain numbers without any clue what's wrong with the code.

So I decided not to use them for YaST.

Writing Jenkins Plugins in Ruby


I found an interesting possibility when playing with Jenkins - it's possible to write Jenkins plugins in Ruby! There is also an example. Maybe we could simply enhance Jenkins if needed...

Monday, February 9, 2015

Git status in bash prompt - bash-git-prompt

Do you use Git?


Have you ever committed a change into a wrong branch? Have you ever started working on a feature/bug fix without pulling from the remote and then needed to rebase and resolve conflicts later? Have you ever forgotten your stashed changes?

With the bash-git-prompt tool that's gone!

The tools


Recently I was looking for some tool which would help me to avoid the problems mentioned above.

At first I tried sexy-bash-prompt, but I didn't like the default scheme, it was too different from the usual openSUSE bash command prompt. I tried to customize it, but although the code allows some customization I could not make it look like the openSUSE default.

Then I tried bash-git-prompt. This tool is similar to sexy-bash-prompt, but provides more details about the Git status. What I really like:

  • It prints more details about the current Git status (the number of commits behind/ahead of the remote, the count of stashed changes,...)
  • It periodically fetches the status from the remote, so if there are new commits on the remote you will be notified (the check runs every 5 minutes, but it's configurable and can be completely disabled)
  • It displays the exit status of the last command; if a command fails you'll see a red X mark and the exit status number, so a failure is more visible on the terminal.
I really recommend giving it a try if you use Git from the command line, it can save you some headaches...

Installation


Installation is really easy, just run

cd ~
git clone https://github.com/magicmonty/bash-git-prompt.git .bash-git-prompt

If you want to update it later just run "git pull" in that directory.


openSUSE theme


Again the default style is too different, but it turned out that the colors and the style can be easily changed and bash-git-prompt already provides several styles out of the box.

So I decided to create a new style which would mimic the default openSUSE bash prompt as closely as possible, so you would not notice that you are using bash-git-prompt when you are outside a Git repository.

To use bash-git-prompt with the openSUSE theme simply add this to your ~/.bashrc file:

GIT_PROMPT_THEME=Single_line_openSUSE
source ~/.bash-git-prompt/gitprompt.sh

The next time you start a new shell session bash-git-prompt will be automatically loaded.

Except for the exit status at the beginning of the line, it looks like the usual openSUSE bash prompt.

Let's see how it can help with Git:





Note: If you do not like the exit status indicator at the beginning of the line then use Single_line_NoExitState_openSUSE theme instead.

Help


You can run git_prompt_help to see what the used symbols mean.

There are also some examples displayed by the git_prompt_examples command.

And if you want to change the colors used in the style, you can use git_prompt_color_samples to see all the available colors (note: this depends on the terminal configuration, the colors might not be the same in a different terminal).

Note for Midnight Commander Users


I noticed that bash-git-prompt does not work correctly when you use Midnight Commander ("mc"). The problem is that mc overwrites the PROMPT_COMMAND shell variable (see the mc ticket for more details).

My workaround is to define an alias in the ~/.alias file:

alias gp="source ~/.bash-git-prompt/gitprompt.sh"

and run "gp" whenever you want to use the bash-git-prompt functionality in the mc subshell.

(The ticket above contains a patch for mc, but you would need to recompile it from sources and maintain it yourself, i.e. keep it up to date if a security fix is released...)

Wednesday, December 17, 2014

Cucumber Testing Framework

Cucumber is a BDD (behaviour driven development) framework. In contrast to other BDD frameworks (like RSpec), the specification is written in a natural, human-readable language.

Rubocop-yast

I wanted to write tests for the new Rubocop-yast plugin in some nice way. I started with RSpec, but the tests looked ugly (writing multiline indented Ruby code in a string literal requires extra escaping, which makes it quite hard to read...).

Then I looked at the original Zombie Killer tests. They are written in Markdown so they are more readable, you can add additional comments to the tests etc. And how do you run the Markdown tests? There is a custom Markdown renderer which converts the Markdown specification into an RSpec test.

Looks nice, but having a custom renderer makes things difficult: we have to maintain it, and the Markdown format is specific to this project - there is nothing else similar to it...

Then Cucumber came to my mind! It exactly fits our needs! The specification allows writing extra comments and notes, it's readable almost like our Markdown, and it is a standard tool. The killer feature is the multiline docstring parameter. It allows writing indented Ruby code directly without any extra escaping.
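To illustrate, a minimal sketch of a step definition which receives such a docstring as a plain Ruby string (the step text is made up for this example):

# features/step_definitions/code_steps.rb - a sketch
Given(/^the original Ruby code$/) do |code|
  # "code" contains the multiline docstring exactly as written in the
  # .feature file, no extra escaping is needed
  @original_code = code
end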

You can check how the tests look here, see the *.feature files. Here is the code which converts the specification into the testing code.

Pros & Cons

Here is the summary of pros & cons I found when starting with Cucumber:

Advantages
  • Specification in natural language, readable tests and user stories
  • Allows using another testing framework (like RSpec) for running the real tests
Disadvantages
  • Extra code for converting the textual specification to Ruby code
  • One more layer between test description and the code (you need to make sure the code really matches the description)
  • Test descriptions should describe the high level features (usually the user interaction), they should not describe the low-level implementation details
Ideally the features should be written by managers, designers or other non-technical people. They can describe the required features without any programming skills. That's probably the most important Cucumber feature.

Suitable for Yast?

In my opinion not. Why? We are usually focused on low-level features and we would probably need too much code for converting the specifications to tests. And the tests are usually too different; we would need to write extra conversion code for each test and maintain it. The overall benefit would be small in my opinion and would not be worth it.

Moreover, the feature descriptions we usually get are hard to convert to testing code; they are usually too generic or cannot be tested in unit tests (e.g. the installer features).


Monday, November 24, 2014

Using Rubocop

Introduction


Rubocop is a Ruby static code analyzer which looks for common code smells and checks the coding style.

Installation

The installation is quite easy, just run the "sudo gem install rubocop" command (assuming you have Ruby already installed).

Initial Run and Creating the Config File

If there is no .rubocop.yml file in your project root then Rubocop uses the default configuration.

It is a good idea to let Rubocop generate the project defaults for you: simply run "rubocop --auto-gen-config". This will create a .rubocop_todo.yml file which can be used as a template for your initial config file.

The default generated config file disables all checks which fail. That means if you run Rubocop with this config file (or if you remove the _todo suffix) it will report success.

Fixing the Issues

Now you can go through the disabled checks in the created template one by one, enable each check, and see where the problem is and whether it's a valid issue according to your style or preferences.

There are basically these ways to fix an error reported by Rubocop:

  • Fix the issue according to the suggestions reported by Rubocop
  • Let Rubocop fix it for you (this does not work for all the issues found, but the majority of coding style issues, like indentation or white space usage, can be fixed automatically) - just add the "-a" or "--auto-correct" option. You should manually check the changes ("git diff") after auto-correction, just to be sure the fix was correct and had no side effects.
  • Change the expected style (e.g. the default Rubocop style is single quoted string literals; if you prefer double quoted strings in your project then set a different default in the config), see the possible options in the default configuration.
  • Disable the check locally in the code (e.g. the rule is valid, but the specific place in the code is an exception where breaking the rule is correct - for example you prefer ".nil?" over "== nil", but in a test you want to check that your == operator definition correctly handles comparison with nil), see the example after this list
  • Disable the check globally
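For example, a local exception can be marked directly in the code with a disable/enable comment pair (the cop name and the code below are just an illustration):

# rubocop:disable Style/NilComparison
it "correctly compares with nil" do
  expect(subject == nil).to eq(false)
end
# rubocop:enable Style/NilComparison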

Using Rubocop in CI

To ensure that the coding style is honored during development it is a good idea to run Rubocop on a CI (Continuous Integration) server like Travis or Jenkins.
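One simple way to do that is to expose Rubocop as a Rake task and let the CI job depend on it - a sketch using the Rake task shipped with the rubocop gem (the aggregate :ci task name is just an example):

# Rakefile
require 'rubocop/rake_task'

# defines a "rubocop" task which fails when any offense is found
RuboCop::RakeTask.new

# a hypothetical aggregate task run by Travis or Jenkins
# (assumes the project already defines a :test task)
task :ci => [:rubocop, :test]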

Rubocop in Yast

I tried to start with Rubocop in the Yast registration module, which is written from scratch and should not contain ugly code parts introduced by the YCP to Ruby conversion.

Initially it reported almost 3000 (!) offenses, but many of them were false positives caused by different coding style defaults (e.g. single quoted vs. double quoted string literals). After adapting the config style (and relaxing some metric checks which would require non-trivial refactoring) the number of issues decreased to about 900.

The majority of the issues were harmless and related to white space, but some of the checks found really bad code, like this "private" modifier issue.

Thanks to the nice auto-correction feature the majority of the issues (~830, which is about 92%) could be fixed automatically, so the number of manual changes was rather small.

I found only two issues with auto-correction - in one place it removed a comment which was inside a removed block (the comment should have been moved outside the block), and in another case it added a trailing space at the end of a line (at that time I had the trailing whitespace check disabled, so it was not fixed by another check).

You can see all the changes in this pull request.

Result

Rubocop was a nice experience as it not only complains about what is wrong but also suggests how to fix the found issues. Moreover, it has an auto-correct feature which works very well and can fix almost all coding style issues automatically.

So let's see if we can use it in more Yast modules...

Friday, October 25, 2013

Adding a new package to the inst-sys (openSUSE installation system) or to the rescue system

Why?


Sometimes you need to add a new RPM to the openSUSE installation system (called inst-sys) or to the rescue image. Especially in Yast development we usually need to include new tools or new subsystems (for example, the recent Yast switch to Ruby required adding the Ruby interpreter to the inst-sys).

How?


The overall procedure is quite simple: basically you need to modify the installation-images package and then remaster the installation medium (or update the boot server, depending on how you boot the system). But there are some tricky parts...

Installation-images


This RPM builds file system images (using compressed squashfs to save valuable space) with the installation system and the rescue system.

If you want to add a new package then follow these steps:
  1. Checkout the installation-images package for your target distribution from OBS, e.g.
    osc co openSUSE:13.1 installation-images
  2. Add your new package to the BuildRequires list in the installation-images.spec file
  3. Modify the package list for the target file system image, which is stored in a *.file_list file. You can find a full example here - there is a diff for adding the libyui-qt-graph package to the inst-sys. You can include a complete package or only explicitly listed files (this is useful if the package is huge and you need just a small file from it).
    Note: If you want to later update the installation-images package used as the base, then it is better to use a patch instead of directly modifying the file list. The patch can later be easily applied to the updated version.
  4. Build the package locally using 
    osc build --userootforbuild
    command.
    This will take some time, it needs a lot of packages and building the target file system images is also not trivial.
    The build requires the root user; without the extra option it would fail. If you really need to build the package on the OBS server automatically then you need to ask the OBS maintainers to add an exception for your package.
    (In YaST:Head:installer/installation-images we have such an exception.)

Updating the medium


Updating the boot medium is quite tricky: you need to unpack the /CD1 and /SuSE/openSUSE/CD1 directories from the built installation-images RPM package, overwrite the original files and create the ISO image again.

But I have actually never tried that; in OBS we simply build our own ISO image using a kiwi project.

Wednesday, October 16, 2013

"New Installer" Details Have Been Published

The details mentioned in the previous blog post have been published on the New Installer GitHub wiki page in the yast-installation repository.

If you have any questions or comments then ask at the yast-devel mailing list.

Monday, September 30, 2013

Yast "New Installer" Development Started

Yast "New Installer"


The Yast team started the development of the "new Yast installer". It actually won't be a complete rewrite of the Yast installer; it will rather be a refresh - refactoring and adding some enterprise features (for SLE12 - SUSE Linux Enterprise 12). But of course, these enhancements will also be present in the next openSUSE release (openSUSE 13.2; 13.1 is almost done).

We have just started a public discussion about it; if you have some ideas about the Yast installer, just join the discussion on the yast-devel mailing list.

See the announcement, more details will be published later, stay tuned...

Tuesday, July 31, 2012

OpenSUSE Hackweek VIII - New WebYaST home page


I mentioned in the previous post that we want to have new cool home pages for WebYaST.

We started at our Appliance Workshop, last week I continued with it as a Hackweek VIII project, and the new page is now available here:



If you have any comments just post them here! Thank you for your feedback!

OpenSUSE Hackweek VIII - New WebYaST Demo Appliance

I decided to work on WebYaST home page and finish the WebYaST demo appliance.

Some time ago we had a workshop where we decided to create cool web pages for our projects. We started with WebYaST. The goal is to create a nice looking web presentation.

And we want to have some WebYaST demo so users could easily try it without any setup or installation. So we decided to create a WebYaST demo appliance in SUSE Studio which can be used as a LiveCD or USB stick or even directly in Studio testdrive in a web browser (no need to download anything!).


Including WebYaST in an openSUSE-12.1 SUSE Studio Appliance


This is really easy as Studio has WebYaST support built in: just go to the Configuration -> Appliance tab and check the Enable WebYaST check box at the very bottom of the page. And that's it!

Studio will add all the needed WebYaST packages, open port 54984 in the firewall and autostart WebYaST at boot.

But if we want to have a really nice demo we still need to do some improvements...


Appliance Fine-tuning


Originally I started with the KDE desktop, but WebYaST basically does not depend on any desktop environment, so I switched to LXDE which takes less space and should run faster than KDE (especially on slower machines).

SUSE Studio supports autologin configuration (Configuration -> Desktop) so users do not have to enter any password to start a graphical session, that's nice.

Another nice feature is automatic application start, so we can easily start Firefox. The only problem was that in testdrive WebYaST was sometimes started later than Firefox, which then obviously displayed an error page. This is solved by a 5 second delay before starting Firefox.



Importing WebYaST Certificate into Firefox


This was the hard part and it's quite tricky. When you connect to a running WebYaST instance in Firefox for the first time you'll see a certificate warning. The problem is that WebYaST generates a self-signed certificate (when there is no existing certificate yet) which is not trusted, so Firefox displays that warning. And this might be scary for beginner users - we want our users to try WebYaST without any doubts...

Then I found the certutil tool available in the mozilla-nss-tools package in openSUSE. It can be used to import a certificate into Firefox from the command line. So for WebYaST this means running this command for the default user:

# certutil -A -n "Webyast certificate" -t "C,," -d /home/tux/.mozilla/firefox/*.default \
-i /etc/lighttpd/certs/webyast.pem

(See certutil --help for more details.)

The only problem is that Firefox uses profiles (named configurations) stored in a randomly generated directory which is created at the first start. And that directory must exist before executing the certutil command.

This is solved by overlay files in Studio: there is a prepared directory with the default Firefox profile.

Server Name in the Generated certificate

During testing I found out that there is a problem with the certificate server name and the URL. Firefox displayed a certificate error when opening https://localhost:54984, with a message saying that the certificate is valid only for the linux-foobar server (with foobar replaced by some random characters).

The problem is that the random hostname is automatically generated and we cannot easily change that in the URL for Firefox.

The trick is to use IP address directly instead of a host name. The WebYaST certificate in the demo appliance is generated for host 127.0.0.1 and URL for Firefox is set to https://127.0.0.1:54984.

See yastwc overlay file for all certificate related changes.


The Final WebYaST Demo Appliance


The final WebYaST demo appliance is available in SUSE Studio Gallery. You can easily try WebYaST as a LiveCD/LiveUSB stick or directly in your browser in Studio testdrive.

And how to use it? That's really simple! Just boot the image and wait until Firefox opens with the WebYaST login dialog. Then use the root user name with the linux password to log into WebYaST.

Have a lot of fun!

Tuesday, January 24, 2012

Switching from Gettext to FastGettext in a Rails3 app

From Gettext to FastGettext

In SLMS we use Gettext for i18n support. Unfortunately it doesn't work with the new Rails 3. But we found out that there is the FastGettext Ruby gem which does work with Rails 3, so we decided to switch to this different implementation.

In this blog post I'll describe the steps needed when switching from Gettext to FastGettext, together with solutions for some problems we found during the transition.

Using the new Ruby gems

You will need several new Ruby gems; the first step is to remove the old Gettext gems and replace them with the FastGettext gems.
So replace these gems in your Gemfile:

gem 'locale'
gem 'locale_rails'
gem 'gettext'
gem 'gettext_activerecord'
gem 'gettext_rails'
with:
gem 'fast_gettext'

# 0.4.3 contains fixes in
#'rake gettext:store_model_attributes' task
gem 'gettext_i18n_rails', '>= 0.4.3'

# rails-i18n provides translations for ActiveRecord
# validation error messages
gem 'rails-i18n'

# needed to collect translatable strings
# not needed at production
group :development do
  # needed for HAML support (optional)
  gem 'ruby_parser'

  # no need to load the gem via require
  # we only need the rake tasks
  gem 'gettext', '>= 1.9.3', :require => false
end

Then you need to initialize FastGettext - create the config/initializers/fast_gettext.rb file:
# define your text domain
FastGettext.add_text_domain 'foo', :path => File.join(File.dirname(__FILE__), '..', '..', 'locale')


# set the default textdomain
FastGettext.default_text_domain = 'foo'

# set available locales
# (note: the first one is used as a fallback if you try to set an unavailable locale)
FastGettext.default_available_locales = ["en_US","ar","cs","de","es",...]
Replace foo with your textdomain. Now you need to add the FastGettext initialization to your application controller:
class ApplicationController < ActionController::Base
  # replace these old Gettext calls:
  #   init_gettext "your_domain"
  #   GetText.textdomain("your_domain")
  # by this:
  include FastGettext::Translation

  before_filter :set_users_locale 

  def set_users_locale
    I18n.locale = FastGettext.set_locale(params[:locale] || cookies[:locale] ||
      request.env['HTTP_ACCEPT_LANGUAGE'] || 'en_US')
    cookies[:locale] = I18n.locale if cookies[:locale] != I18n.locale.to_s
  end
end 
The set_users_locale before filter handles setting the correct locale for every request. The locale is set via a cookie and can be changed using the ?locale=locale URL option. It is possible to use a different solution for switching the locale, e.g. a path prefix or the domain name - see the Rails guide.
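For example, the path prefix variant could be wired in the routes like this (just a sketch, the resource and the locale regexp are made up):

# config/routes.rb - optional :locale path prefix
scope '(:locale)', :locale => /[a-z]{2}(_[A-Z]{2})?/ do
  resources :products
end

With such a scope params[:locale] is filled from the URL path, so the set_users_locale filter above keeps working unchanged.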

Note: The application needs to be restarted after any change in the translations.
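Once the setup is in place, strings are marked for translation the same way as with the old Gettext - a trivial example (the messages are made up):

# in a controller or view (FastGettext::Translation provides _ and n_)
flash[:notice] = _("Settings have been saved")

# plural form: n_(singular, plural, count)
message = n_("Found %d error", "Found %d errors", count) % count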

Solved Problems

Automatic detection of available locales

Using a fixed list of available locales might not be nice, especially if you want to dynamically add new translations later. In this case you need to find the available locales dynamically at start. The solution is to put this code into the config/initializers/fast_gettext.rb file:
# put 'en_US' as first, the first item is used as a fallback
# when requested locale (via ?locale= URL parameter) is not found
FastGettext.default_available_locales = ["en_US"]

# get available locales automatically
Dir[File.join(File.dirname(__FILE__), '..', '..', 'locale', "/*/LC_MESSAGES/*.mo")].each do |l|
  if l.match(/\/([^\/]+)\/LC_MESSAGES\/.*\.mo$/) && !FastGettext.default_available_locales.include?($1)
    FastGettext.default_available_locales << $1
  end
end

Language and Country Separator in locale name

Rails' native localization support uses the I18n module for translations. The problem is that it uses a dash (-) as the separator between the language and country code in locale names.

This causes a problem when used with the standard gettext locale schema, which uses an underscore (_) as the separator. For example, translations from the rails-i18n gem will not be found when the current locale is en_US; it expects the en-US locale.

The problem can be solved by defining locale fallbacks like this (put this into the config/initializers/fast_gettext.rb file):
# enable fallback handling
I18n::Backend::Simple.include(I18n::Backend::Fallbacks)

# set some locale fallbacks needed for ActiveRecord translations
# located in rails_i18n gem (e.g. there is en-US.yml translation)
I18n.fallbacks[:"en_US"] = [:"en-US", :en]
I18n.fallbacks[:"en_GB"] = [:"en-GB", :en]
I18n.fallbacks[:"pt_BR"] = [:"pt-BR", :pt]
I18n.fallbacks[:"zh_CN"] = [:"zh-CN"]
I18n.fallbacks[:"zh_TW"] = [:"zh-TW"]
I18n.fallbacks[:"sv"] = [:"sv-SE"]
This means that if, for example, a translation for the en_US locale is not found, then the en-US locale will be tried and then the en locale.

Including the source file name and line number in the final POT file

By default, when you run the 'rake gettext:find' task to collect the translatable strings, the output will not contain the source file names and line numbers. Having them is very useful: if you get feedback from a translator (like a typo in the original message), you don't have to scan all the files - you immediately know where to fix the problem.

If you want to change this behavior and include the line numbers, add this configuration to the config/initializers/fast_gettext.rb file:
# configure default msgmerge parameters (the default contains "--no-location" option
# which removes code lines from the final POT file)
Rails.application.config.gettext_i18n_rails.msgmerge = ["--sort-output", "--no-wrap"]

Sorting messages in the final POT file

The 'rake gettext:find' task sorts the messages in the final POT file alphabetically. The advantage is that if you add a new string and regenerate the file then the files will be similar and the diff will be small.

The problem is that the sorting is done in the merge step, when merging the newly found translations with the old ones. At the very first run (when the final POT file does not exist yet) the merge step is skipped and thus the messages are not sorted. This can be fixed by running the task once more (the second run will find the existing messages and do the merge with sorting).

But the problem is that you can easily forget to run the task a second time. The workaround is to create an empty target POT file when it doesn't exist yet. Unfortunately a simple touch command is not sufficient (msgmerge failed for me with some strange UTF-8 error); we have to create a valid POT file, just without any messages.

The workaround is to put this code into the lib/tasks/gettext.rake file:
# 'gettext:find' sorts the messages alphabetically only when it is merging existing messages
# copying empty pot file from the template forces sorting even at the first run
namespace :gettext do
  task :create_pot_template do
    FileUtils.cp("locale/template.pot", "locale/textdomain.pot") unless File.exists?("locale/textdomain.pot")
  end
end

# add task dependency
task :'gettext:find' => :'gettext:create_pot_template'
The locale/template.pot template should look like this:
# SOME DESCRIPTIVE TITLE.
# Copyright (C) YEAR THE PACKAGE'S COPYRIGHT HOLDER
# This file is distributed under the same license as the PACKAGE package.
# FIRST AUTHOR <email@address>, YEAR.
#
#, fuzzy
msgid ""
msgstr ""
"Project-Id-Version: version 0.0.1\n"
"POT-Creation-Date: 2012-01-16 17:56+0100\n"
"PO-Revision-Date: 2012-01-16 17:56+0100\n"
"Last-Translator: FULL NAME <email@address>\n"
"Language-Team: LANGUAGE <ll@li.org>\n"
"MIME-Version: 1.0\n"
"Content-Type: text/plain; charset=UTF-8\n"
"Content-Transfer-Encoding: 8bit\n"
"Plural-Forms: nplurals=INTEGER; plural=EXPRESSION;\n"

Automatic translation in HAML files

It is possible to extend the HAML parser to automatically translate all plain text strings. The advantage is that you don't have to explicitly use the _() function and you cannot forget to mark a text for translation.

This can be done using this code snippet. Save it to a file, remove the require calls at the beginning (they are obsolete and do not work with the new gettext) and require it in your ApplicationController.

Then you need to add support to the 'rake gettext:find' task. Save this code snippet to the lib/haml_parser.rb file. You need to replace require 'gettext/parser/ruby' with require 'gettext/tools/parser/ruby' so it works with the newer gettext gem.

Then put this into the lib/tasks/gettext.rake file:
# extend the HAML parser to extract plain text messages
# to support automatic translations (without need to mark the text with _())
namespace :gettext do
  task :haml_parser do
    require 'haml_parser'
  end
end

# extend the HAML parser before collecting the translatable texts
task :'gettext:find' => :'gettext:haml_parser'