Articles by Alexander Todorov

Using OpenShift as Amazon CloudFront Origin Server

It's been several months since the start of Difio and I have started migrating various parts of the platform to a CDN. The first to go are static files like CSS, JavaScript, images and such. In this article I will show you how to get started with Amazon CloudFront and OpenShift. It is very easy once you understand how it works.

Why CloudFront and OpenShift

Amazon CloudFront is cheap and easy to set up, with virtually no maintenance. The most important feature is that it can fetch content from any public website. Integrating it with OpenShift gives some nice benefits:

  • All static assets are managed with Git and stored in the same place where the application code and HTML are - easy to develop and deploy;
  • No need for external service to host the static files;
  • CloudFront will be serving the files so network load on OpenShift is minimal;
  • Easy to manage versioned URLs because HTML and static assets are in the same repo - more on this later;

Object expiration

CloudFront will cache your objects for a certain period and then expire them. Frequently used objects are expired less often. Depending on the content you may want to update the cache more or less frequently. In my case CSS and JavaScript files change rarely, so I wanted to tell CloudFront not to expire the files quickly. I did this by telling Apache to send a custom value for the Expires header.

    $ curl http://d71ktrt2emu2j.cloudfront.net/static/v1/css/style.css -D headers.txt
    $ cat headers.txt 
    HTTP/1.0 200 OK
    Date: Mon, 16 Apr 2012 19:02:16 GMT
    Server: Apache/2.2.15 (Red Hat)
    Last-Modified: Mon, 16 Apr 2012 19:00:33 GMT
    ETag: "120577-1b2d-4bdd06fc6f640"
    Accept-Ranges: bytes
    Content-Length: 6957
    Cache-Control: max-age=31536000
    Expires: Tue, 16 Apr 2013 19:02:16 GMT
    Content-Type: text/css
    Strict-Transport-Security: max-age=15768000, includeSubDomains
    Age: 73090
    X-Cache: Hit from cloudfront
    X-Amz-Cf-Id: X558vcEOsQkVQn5V9fbrWNTdo543v8VStxdb7LXIcUWAIbLKuIvp-w==,e8Dipk5FSNej3e0Y7c5ro-9mmn7OK8kWfbaRGwi1ww8ihwVzSab24A==
    Via: 1.0 d6343f267c91f2f0e78ef0a7d0b7921d.cloudfront.net (CloudFront)
    Connection: close

All headers before Strict-Transport-Security come from the origin server.

Versioning

Sometimes, however, you need to update the files and force CloudFront to refresh its content. The recommended way to do this is to use URL versioning and update the path to the files which changed. This will force CloudFront to cache and serve the content under the new path while keeping the old content available until it expires. This way your visitors will not end up viewing your site with a mix of new CSS and old JavaScript.

There are many ways to do this and there are some nice frameworks as well. For Python there is webassets. I don't have many static files so I opted for no additional dependencies. Instead I will be updating the versions by hand.

What comes to mind is using mod_rewrite to redirect the versioned URLs back to non-versioned ones. However, there's a catch. If you do this CloudFront will cache the redirect itself, not the content. The next time visitors hit CloudFront they will receive the cached redirect and follow it back to your origin server, which defeats the purpose of having a CDN.

To do it properly you have to rewrite the URLs but still return a 200 response code and the content which needs to be cached. This is done with mod_proxy:

    RewriteEngine on
    RewriteRule ^VERSION-(\d+)/(.*)$ http://%{ENV:OPENSHIFT_INTERNAL_IP}:%{ENV:OPENSHIFT_INTERNAL_PORT}/static/$2 [P,L]

This .htaccess trick doesn't work on OpenShift though. mod_proxy is not enabled at the moment. See bug 812389 for more info.

Luckily I was able to use symlinks to point to the content. Here's how it looks:

    $ pwd
    /home/atodorov/difio/wsgi/static
    
    $ cat .htaccess
    ExpiresActive On
    ExpiresDefault "access plus 1 year"
    
    $ ls -l
    drwxrwxr-x. 6 atodorov atodorov 4096 16 Apr 21,31 o
    lrwxrwxrwx. 1 atodorov atodorov    1 16 Apr 21,47 v1 -> o
    
    settings.py:
    STATIC_URL = '//d71ktrt2emu2j.cloudfront.net/static/v1/'
    
    HTML template:
    <link type="text/css" rel="stylesheet" media="screen" href="{{ STATIC_URL }}css/style.css" />
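
With this layout, rolling out updated assets is just a new symlink and a bump of STATIC_URL. A minimal sketch of how a version bump could look, assuming v2 is the next version:

    $ cd wsgi/static/
    $ ln -s o v2
    $ git add v2 && git commit -m "bump static assets to v2"

After changing STATIC_URL in settings.py to end in /static/v2/ and pushing, CloudFront fetches the assets under the new path while the objects cached under v1 simply age out.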

How to implement it

First you need to split all CSS and JavaScript from your HTML if you haven't done so already.

Then place everything under your git repo so that OpenShift will serve the files. For Python applications place the files under the wsgi/static/ directory in your git repo.

Point all of your HTML templates to the static location on OpenShift and test that everything works as expected. This is best done if you're using some sort of template language and storing the location in a single variable which you can change later. Difio uses Django and the STATIC_URL variable of course.

Create your CloudFront distribution - don't use Amazon S3, instead configure a custom origin server. Write down your CloudFront URL. It will be something like 1234xyz.cloudfront.net.

Every time a request hits CloudFront it will check if the object is present in the cache. If not present CloudFront will fetch the object from the origin server and populate the cache. Then the object is sent to the user.
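
A quick way to observe this is to request the same object twice and watch the X-Cache header, as in the curl example above; the first request is typically a miss and subsequent ones are hits:

    $ curl -s -D - -o /dev/null http://d71ktrt2emu2j.cloudfront.net/static/v1/css/style.css | grep X-Cache
    X-Cache: Hit from cloudfront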

Update your templates to point to the new cloudfront.net URL and redeploy your website!


OpenShift Cron Takes Over Celerybeat

Celery is an asynchronous task queue/job queue based on distributed message passing. You can define tasks as Python functions and execute them in the background or in a periodic fashion. Difio uses Celery for virtually everything. Some of the tasks are scheduled after an event takes place (like a user pressing a button), others are scheduled periodically.

Celery provides several components of which celerybeat is the periodic task scheduler. When combined with Django it gives you a very nice admin interface which allows periodic tasks to be added to the scheduler.

Why change

Difio had relied on celerybeat for a couple of months. Back then, when Difio launched, there was no cron support on OpenShift, so running celerybeat sounded reasonable. It used to run on a dedicated virtual server and most of the time that was fine.

There were a number of issues which Difio faced during its first months:

  • celerybeat would sometimes die due to no free memory on the virtual instance. When that happened no new tasks were scheduled and data was left unprocessed. Not to mention that a higher memory instance, and the processing power which comes with it, costs extra money.

  • Difio is split into several components which need to have the same code base locally - the most important being the database settings and the periodic tasks code. On at least one occasion celerybeat failed to start because of buggy task code. The offending code was fixed on the application server on OpenShift but not properly synced to the celerybeat instance. Keeping code in sync is a priority for distributed projects which rely on Celery.

  • Celery and django-celery seem to be updated quite often. This poses a significant risk of ending up with different versions on the scheduler, the worker nodes and the app server, which would bring the whole application to a halt if at some point a backward incompatible change is introduced and not properly tested and rolled out. Keeping infrastructure components in sync can be a big challenge and I try to minimize this effort as much as possible.

  • Having to navigate to the admin pages every time I add a new task or want to change the execution frequency doesn't feel very natural for a console user like myself and IMHO is less productive. For the record, I primarily use mcedit. I wanted something closer to the write, commit and push work-flow.

The take over

It's been some time since OpenShift introduced the cron cartridge and I decided to give it a try.

The first thing I did was write a simple script which can execute any task from the difio.tasks module by piping it to the Django shell (a Python shell actually).

run_celery_task
#!/bin/bash
#
# Copyright (c) 2012, Alexander Todorov <atodorov@nospam.otb.bg>
#
# This script is symlinked to from the hourly/minutely, etc. directories
#
# SYNOPSIS
#
# ./run_celery_task cron_search_dates
#
# OR
#
# ln -s run_celery_task cron_search_dates
# ./cron_search_dates
#

TASK_NAME=$1
[ -z "$TASK_NAME" ] && TASK_NAME=$(basename $0)

if [ -n "$OPENSHIFT_APP_DIR" ]; then
    source $OPENSHIFT_APP_DIR/virtenv/bin/activate
    export PYTHON_EGG_CACHE=$OPENSHIFT_DATA_DIR/.python-eggs
    REPO_DIR=$OPENSHIFT_REPO_DIR
else
    REPO_DIR=$(dirname $0)"/../../.."
fi

echo "import difio.tasks; difio.tasks.$TASK_NAME.delay()" | $REPO_DIR/wsgi/difio/manage.py shell

This is a multicall script which allows symlinks with different names to point to it. Thus to add a new task to cron I just need to make a symlink to the script from one of the hourly/, minutely/, daily/, etc. directories under cron/.

The script accepts a parameter as well, which allows me to execute it locally for debugging purposes or to schedule some tasks out of band. This is how it looks on the file system:

$ ls -l .openshift/cron/hourly/
some_task_name -> ../tasks/run_celery_task
another_task -> ../tasks/run_celery_task
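
Adding a new periodic task is then a matter of one more symlink and a commit; for example (cron_another_task stands for whatever task name exists in difio.tasks):

$ cd .openshift/cron/hourly/
$ ln -s ../tasks/run_celery_task cron_another_task
$ git add cron_another_task && git commit -m "run cron_another_task hourly"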

After having done these preparations I only had to embed the cron cartridge and git push to OpenShift:

rhc-ctl-app -a difio -e add-cron-1.4 && git push

What's next

At present OpenShift can schedule your jobs every minute, hour, day, week or month and does so using the run-parts script. You can't schedule a script to execute at 4:30 every Monday or every 45 minutes for example. See rhbz #803485 if you want to follow the progress. Luckily Difio doesn't use this sort of job scheduling for the moment.
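
Until then, one possible workaround (my own sketch, not something Difio relies on) is to symlink the script from the minutely/ directory and guard it so that the real work happens only when the time matches, for example every 45 minutes:

#!/bin/bash
# exit quietly unless the current epoch minute is a multiple of 45
[ $(( $(date +%s) / 60 % 45 )) -eq 0 ] || exit 0
# ... schedule the actual task from here on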

Difio has been scheduling periodic tasks from OpenShift cron for a few days already. It seems to work reliably and with no issues. One less component to maintain and worry about. More time to write code.


Tip: How to Get to the OpenShift Shell

I wanted to examine the Perl environment on OpenShift and got tired of making snapshots, unzipping the archive and poking through the files. I wanted a shell. Here's how to get one.

  1. Get the application info first

    $ rhc-domain-info 
    Password: 
    Application Info
    ================
    myapp
        Framework: perl-5.10
         Creation: 2012-03-08T13:34:46-04:00
             UUID: 8946b976ad284cf5b2401caf736186bd
          Git URL: ssh://8946b976ad284cf5b2401caf736186bd@myapp-mydomain.rhcloud.com/~/git/myapp.git/
       Public URL: http://myapp-mydomain.rhcloud.com/
    
     Embedded: 
          None
    
  2. The Git URL has your username and host

  3. Now just ssh into the application

    $ ssh 8946b976ad284cf5b2401caf736186bd@myapp-mydomain.rhcloud.com
    
        Welcome to OpenShift shell
    
        This shell will assist you in managing OpenShift applications.
    
        !!! IMPORTANT !!! IMPORTANT !!! IMPORTANT !!!
        Shell access is quite powerful and it is possible for you to
        accidentally damage your application.  Proceed with care!
        If worse comes to worst, destroy your application with 'rhc app destroy'
        and recreate it
        !!! IMPORTANT !!! IMPORTANT !!! IMPORTANT !!!
    
        Type "help" for more info.
    
    [myapp-mydomain.rhcloud.com ~]\>
    

Voila!


How to Update Dependencies on OpenShift

If you are already running some cool application on OpenShift it could be the case that you have to update some of the packages installed as dependencies. Here is an example for an application using the python-2.6 cartridge.

Pull latest upstream packages

The simplest method is to update everything to the latest upstream versions.

  1. Backup! Backup! Backup!

    rhc-snapshot -a mycoolapp
    mv mycoolapp.tar.gz mycoolapp-backup-before-update.tar.gz
    
  2. If you haven't specified any particular version in setup.py it will look like this:

    ...
    install_requires=[
                    'difio-openshift-python',
                    'MySQL-python',
                    'Markdown',
                   ],
    ...
    
  3. To update simply push to OpenShift instructing it to rebuild your virtualenv:

    cd mycoolapp/
    touch .openshift/markers/force_clean_build
    git add .openshift/markers/force_clean_build
    git commit -m "update to latest upstream"
    git push
    

Voila! The environment hosting your application is rebuilt from scratch.

Keeping some packages unchanged

Suppose that before the update you have Markdown-2.0.1 and you want to keep it! This is easily solved by adding a versioned dependency to setup.py:

-       'Markdown',
+       'Markdown==2.0.1',

If you do that OpenShift will install the same Markdown version when rebuilding your application. Everything else will use the latest available versions.

Note: after the update it's recommended that you remove the .openshift/markers/force_clean_build file. This will speed up the push/build process and will not surprise you with unwanted changes.
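
For example, to drop the marker again:

    git rm .openshift/markers/force_clean_build
    git commit -m "remove the force_clean_build marker"
    git push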

Update only selected packages

Unless your application is really simple or you have tested the updates, I suspect that you want to update only selected packages. This can be done without rebuilding the whole virtualenv. Use versioned dependencies in setup.py:

-       'Markdown==2.0.1',
-       'django-countries',
+       'Markdown>=2.1',
+       'django-countries>=1.1.2',

No need for force_clean_build this time. Just

    git commit && git push

At the time of writing my application was using Markdown-2.0.1 and django-countries-1.0.5. It then updated to Markdown-2.1.1 and django-countries-1.1.2, which also happened to be the latest versions.

Note: this will not work without force_clean_build

-       'django-countries==1.0.5',
+       'django-countries',

Warning

OpenShift uses a local mirror of the Python Package Index. It seems to be updated every 24 hours or so. Keep this in mind if you want to update to a package that was just released - it will not work! See How to Deploy Python Hotfix on OpenShift if you wish to work around this limitation.


Spinning-up a Development Instance on OpenShift

Difio is hosted on OpenShift. During development I often need to spin up another copy of Difio to use for testing and development. With OpenShift this is easy and fast. Here's how:

  1. Create another application on OpenShift. This will be your development instance.

    rhc-create-app -a myappdevel -t python-2.6
    
  2. Find out the git URL for the production application:

    $ rhc-user-info
    Application Info
    ================
    myapp
        Framework: python-2.6
         Creation: 2012-02-10T12:39:53-05:00
             UUID: 723f0331e17041e8b34228f87a6cf1f5
          Git URL: ssh://723f0331e17041e8b34228f87a6cf1f5@myapp-mydomain.rhcloud.com/~/git/myapp.git/
       Public URL: http://myapp-mydomain.rhcloud.com/
    
  3. Push the current code base from the production instance to devel instance:

    cd myappdevel
    git remote add production -m master ssh://723f0331e17041e8b34228f87a6cf1f5@myapp-mydomain.rhcloud.com/~/git/myapp.git/
    git pull -s recursive -X theirs production master
    git push
    
  4. Now your myappdevel is the same as your production instance. You will probably want to modify your database connection settings at this point (see the example below) and start adding new features.
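
For example, if the application uses MySQL you could embed the database cartridge into the development instance and point your settings to the credentials which OpenShift prints out (the exact cartridge name depends on your OpenShift version):

    rhc-ctl-app -a myappdevel -e add-mysql-5.1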


Protected RPM repositories with yum and SSL

In this article I'm going to describe a simple way to set up RPM repositories with access control, using only standard tools such as yum, SSL and Apache. I've been talking about this at one of the monthly conferences of Linux for Bulgarians!

Objective:
Create an RPM repository with access control. Access is allowed only for some systems and forbidden for the rest. This is similar to what Red Hat Network does.

Solution:
We're going to use the capabilities of yum and Apache to work with SSL certificates. The client side (yum) will identify itself using an SSL certificate and the server (Apache) will use this information to control access.

Client side set-up:

  1. Yum version 3.2.27 or newer supports SSL certificates for client authentication. This version is available in Red Hat Enterprise Linux 6.
  2. First you need to generate a private key and certificate using OpenSSL:

    # openssl genrsa -out /var/lib/yum/client.key 1024
    Generating RSA private key, 1024 bit long modulus
    ....++++++
    .......++++++
    e is 65537 (0x10001)
    
    # openssl req -new -x509 -text -key /var/lib/yum/client.key -out /var/lib/yum/client.cert
    You are about to be asked to enter information that will be incorporated
    into your certificate request.
    What you are about to enter is what is called a Distinguished Name or a DN.
    There are quite a few fields but you can leave some blank
    For some fields there will be a default value,
    If you enter '.', the field will be left blank.
    -----
    Country Name (2 letter code) [XX]:BG
    State or Province Name (full name) []:Sofia
    Locality Name (eg, city) [Default City]:Sofia
    Organization Name (eg, company) [Default Company Ltd]:Open Technologies Bulgaria
    Organizational Unit Name (eg, section) []:IT
    Common Name (eg, your name or your server's hostname) []:
    Email Address []:no-spam@otb.bg
    
  3. For better security you can change file permissions of client.key:

    # chmod 600 /var/lib/yum/client.key
    
  4. You need to define the protected repository in a .repo file. It needs to look something like this:

    # cat /etc/yum.repos.d/protected.repo
    [protected]
    name=SSL protected repository
    baseurl=https://repos.example.com/protected
    enabled=1
    gpgcheck=1
    gpgkey=https://repos.example.com/RPM-GPG-KEY
    
    sslverify=1
    sslclientcert=/var/lib/yum/client.cert
    sslclientkey=/var/lib/yum/client.key
    
  5. If you use a self-signed server certificate you can specify sslverify=0, but this is not recommended.

Whenever yum tries to reach the URL of the repository it will identify itself using the specified certificate.
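
A quick way to test the client set-up end to end is to fetch the repository metadata, assuming the repo id from the example above:

    # yum --disablerepo='*' --enablerepo=protected makecache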

Server side set-up:

  1. Install and configure the mod_ssl module for Apache.
  2. Create a directory for the repository which will be available over HTTPS.
  3. In the repository directory add .htaccess, which looks something like this:

    Action rpm-protected /cgi-bin/rpm.cgi
    AddHandler rpm-protected .rpm .drpm
    SSLVerifyClient optional_no_ca
    
  4. The Action and AddHandler directives instruct Apache to run the rpm.cgi CGI script every time someone tries to access files with the .rpm and .drpm extensions.
  5. The SSLVerifyClient directive tells Apache that the HTTP client may present a valid certificate, but it does not have to be (successfully) verifiable. For more information on this configuration please see http://www.modssl.org/docs/2.1/ssl_reference.html#ToC13.
  6. The simplest form of rpm.cgi script may look like this:

    #!/bin/bash
    
    if [ "$SSL_CLIENT_M_SERIAL" == "9F938211B53B4F44" ]; then
        echo "Content-type: application/x-rpm"
        echo "Content-length: $(stat --printf='%s' $PATH_TRANSLATED)"
        echo
    
        cat $PATH_TRANSLATED
    else
        echo "Status: 403"
        echo
    fi
    
  7. The script will allow access to a client which uses a certificate with serial number 9F938211B53B4F44. Other clients will be denied access and the server will return the standard 403 error code. The serial number of your own client certificate can be found as shown below.
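
The serial number of a client certificate generated during the client side set-up can be read with openssl; the value shown below is the one used in the example script:

    # openssl x509 -in /var/lib/yum/client.cert -noout -serial
    serial=9F938211B53B4F44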

In practice:
The above set-up is very basic and only demonstrates the technology behind this. In a real world configuration you will need some more tools to make this really usable.

My company Open Technologies Bulgaria, Ltd. has developed for our customers a custom solution based on the above example, called Voyager. It features a Drupal module, a CGI script and a client side yum plugin.

The Drupal module acts as web interface to the system and allows some basic tasks. Administrators can define software channels and subscription expiration. Customers can register and entitle their systems to particular channels. The functionality is similar to Red Hat Network but without all the extra features which we don't need.

The CGI script acts as the glue between the client side and the Drupal backend. It will read information about the client credentials and act as a first line of defence against non-authorized access. Then it will communicate with the Drupal database and get more information about this customer. If everything is OK then access will be allowed.

The yum plugin's task is to communicate with the Drupal backend and dynamically update the repository definitions based on the available subscriptions. Then it will send a request for the RPM file back to the Apache server where the CGI script will handle it.

The client side also features a tool to generate the client certificate and register the system to the server.

All communications are entirely over HTTPS.

This custom solution has the advantage that it is simple and easy to maintain as well as easy to use. It integrates well with other plugins (e.g. yum-presto for delta rpm support and yum-rhnplugin) and can be used via yum or PackageKit which are the standard package management tools on Red Hat Enterprise Linux 6.


USB multi-seat on Red Hat Enterprise Linux 6

Multiseat configurations are well known in the Linux community and have been used for a number of years now. In the last few years USB docking stations emerged on the market and are becoming popular among multiseat enthusiasts.

My company Open Technologies Bulgaria, Ltd. offers full support of USB multiseat for Red Hat Enterprise Linux 6 as a downstream vendor. We use the name SUMU (simple usb multi user) to refer to the entire multiseat bundle and in this article I'm going to describe the current state of technologies surrounding multiseat, how that works on RHEL 6 and some practical observations.

COMPONENTS

To build a multiseat system you need a number of individual components:

UD-160-A

  • USB docking station like Plugable's UD-160-A or a combination of a USB video card and a stand-alone USB hub. It is also possible to use USB docking stations from other vendors but I'm not aware of anyone who has done it.
  • udlfb - a kernel driver for USB graphics adapters which use DisplayLink based chips. As of January 2011 udlfb.c is part of the mainline kernel tree and is on track for 2.6.38. On RHEL6 this can easily be built as a stand alone module. There are no issues with this package. We also use a custom patch that will draw the string "fbX" onto the green screen. This is useful for easier identification of the display. The patch can be found here.
  • Xorg - this is the standard graphics server on Linux. In RHEL 6 we have xorg-x11-server-Xorg-1.7.7-26 which works perfectly in a multiseat environment.
  • xorg-x11-drv-fbdev with extensions - Xorg driver based on the fbdev driver. The extensions add support for the X DAMAGE protocol. This is a temporary solution until Xorg adds support for the damage protocol. Our package is called xorg-x11-drv-fbdev-displaylink to avoid conflict with the stock package provided by the distribution and it installs the files in /usr/local. You can also change the compiler flags and produce a binary under a different name (say displaylink_drv.so instead of fbdev_drv.so).
  • GDM with multiseat support - GDM will manage multiple local displays and has the ability to add/remove displays dynamically. This functionality is present in versions up to 2.20 and since RHEL6 includes gdm-2.30.4-21.el6 this is a tough choice. There are several possibilities:
    1. Use older GDM, preferably from a previous RHEL release. This gives you a tested piece of software and as long as the previous release is maintained you have (at least some) opportunity of fixing bugs in this code base. However this conflicts with current GDM in the distro which is also integrated with ConsoleKit, Plymouth and PulseAudio.
    2. Use GDM and ConsoleKit that are available in RHEL6 and apply the multiseat patches available at https://bugs.freedesktop.org/show_bug.cgi?id=19333 and http://bugzilla.gnome.org/show_bug.cgi?id=536355. Those patches are quite big (around 3000 lines each) and are not yet fully integrated upstream. They also conflict with custom patches that Red Hat ships in these packages. Your patched packages will also conflict with the stock distro packages and you will not receive any support for them. Since ConsoleKit seems like a fairly important application I would not recommend modifying it.
    3. Use another display manager that can handle multiple displays. https://help.ubuntu.com/community/MultiseatX suggests using KDM instead of GDM. As far as I can tell the configuration is only static and this can break at any time due to the fact that USB device discovery is unpredictable and unreliable. It also lacks an alternative to gdmdynamic according to http://lists.kde.org/?l=kde-devel&m=129898381127854&w=2 which makes it a no-go for plug-and-play multiseat support. There are other less popular display managers but I haven't spent much time researching them.
    4. Just for the record it is also possible that one writes a custom display manager for multiseat operations. This sounds like an overkill and there are many factors which need to be taken into account. If you have enough resources and knowledge to write a display manager you'd better give upstream a hand instead of reinventing the wheel.
    We've decided to use GDM 2.16 from RHEL5 due to the above factors. In practice it turns out that there aren't many issues with this version.
  • A GDM theme - since the GDM version we're using requires a theme which is missing in RHEL6 this is also provided as a separate package. A GDM theme is an XML file plus some images.
  • udev rules, scripts and config files - this is the glue between all the other components. Their primary job is to group the display-mouse-keyboard pairs for a given seat and start the display with the appropriate configuration settings. We also have support for PulseAudio.

RHEL6 SPECIFICS

For detailed description of multiseat configuration take a look at http://plugable.com/2009/11/16/setting-up-usb-multiseat-with-displaylink-on-linux-gdm-up-to-2-20/ or at our source code. I'm going to describe only the differences in RHEL6.

GDM, udlfb and xorg-x11-drv-fbdev-displaylink need to be compiled and installed on the system.

To build an older GDM on RHEL6 you will need to adjust some of the patches in the src.rpm package to apply cleanly and tweak the .spec file to your needs. This also includes using the appropriate version of ltmain.sh from the distro.
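
The general workflow is the usual src.rpm rebuild; a rough sketch (the exact source package name and build-tree location depend on your set-up):

rpm -ivh gdm-2.16.*.src.rpm          # unpacks sources, patches and the .spec file
cd ~/rpmbuild/SPECS/                 # default build tree location on RHEL6
# adjust the patches and tweak gdm.spec as needed, then rebuild:
rpmbuild -ba gdm.spec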

The udev rules and scripts are slightly different due to the different device paths in RHEL6:

SYSFS{idVendor}=="17e9", SYSFS{bConfigurationValue}=="2", RUN="/bin/echo 1 > /sys%p/bConfigurationValue"

ACTION=="add",    KERNEL=="fb*", SUBSYSTEM=="graphics", SUBSYSTEMS=="usb", PROGRAM="/usr/bin/sumu-hub-id /sys/%p/device/../", SYMLINK+="usbseat/%c/display",  RUN+="/etc/udev/scripts/start-seat %c"
ACTION=="remove", KERNEL=="fb*", SUBSYSTEM=="graphics", RUN+="/etc/udev/scripts/stop-seat %k"

KERNEL=="control*", SUBSYSTEM=="sound", BUS=="usb", PROGRAM="/usr/bin/sumu-hub-id /sys/%p/device/../../../../", SYMLINK+="usbseat/%c/sound"
KERNEL=="event*", SUBSYSTEM=="input", BUS=="usb", SYSFS{bInterfaceClass}=="03", SYSFS{bInterfaceProtocol}=="01", PROGRAM="/usr/bin/sumu-hub-id /sys/%p/device/../../../../", SYMLINK+="usbseat/%c/keyboard", RUN+="/etc/udev/scripts/start-seat %c"
KERNEL=="event*", SUBSYSTEM=="input", BUS=="usb", SYSFS{bInterfaceClass}=="03", SYSFS{bInterfaceProtocol}=="02", PROGRAM="/usr/bin/sumu-hub-id /sys/%p/device/../../../../", SYMLINK+="usbseat/%c/mouse",    RUN+="/etc/udev/scripts/start-seat %c"

We also use only /dev/event* devices for both mouse and keyboard.

The sumu-hub-id script returns the string busX-devY indicating the location of the device:

#!/bin/bash
if [ -d "$1" ]; then
    echo "bus$(cat $1/busnum)-dev$(cat $1/devnum)"
    exit 0
else
    exit 1
fi

USB device numbering is unique per bus and there isn't a global device identifier as far as I know. On systems with 2 or more USB buses this can lead to a mismatch between devices and seats.

For seat/display numbering we use the number of the framebuffer device associated with the seat. This is unique, numbers start from 1 (fb0 is the text console) and are sequential unlike USB device numbers. This also ensures easy match between $DISPLAY and /dev/fbX for debugging purposes.

Our xorg.conf.sed template uses evdev as the input driver. This driver is the default in RHEL6:

Section "InputDevice"
    Identifier "keyboard"
    Driver      "evdev"
    Option      "CoreKeyboard"
    Option      "Device"        "/dev/usbseat/%SEAT_PATH%/keyboard"
    Option      "XkbModel"      "evdev"
EndSection

Section "InputDevice"
    Identifier "mouse"
    Driver      "evdev"
    Option      "CorePointer"
    Option      "Protocol" "auto"
    Option      "Device"   "/dev/usbseat/%SEAT_PATH%/mouse"
    Option      "Buttons" "5"
    Option      "ZAxisMapping" "4 5"
EndSection

We also use a custom gdm.conf file to avoid conflicts with stock packages. Only the important settings are shown:

[daemon]
AlwaysRestartServer=false
DynamicXServers=true
FlexibleXServers=0
VTAllocation=false

[servers]
0=inactive

AlwaysRestartServer=false is necessary to avoid a bug in Xorg. See below for a description of the issue.

Audio is supported by setting $PULSE_SINK/$PULSE_SOURCE environment variables using a script in /etc/profile.d which executes after login.
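
A simplified sketch of what such a profile.d script could look like (this is not the actual SUMU script; the per-seat files below are a hypothetical detail, the real glue lives in the udev scripts):

# /etc/profile.d/usbseat-audio.sh
# seat number equals the X display number (and the framebuffer number, see above)
if [ -n "$DISPLAY" ]; then
    SEAT=${DISPLAY#:}
    SEAT=${SEAT%%.*}
    # hypothetical per-seat files, written when the seat's USB sound card is set up
    [ -f "/var/run/usbseat/$SEAT/pulse_sink" ] && export PULSE_SINK="$(cat /var/run/usbseat/$SEAT/pulse_sink)"
    [ -f "/var/run/usbseat/$SEAT/pulse_source" ] && export PULSE_SOURCE="$(cat /var/run/usbseat/$SEAT/pulse_source)"
fi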

SCALABILITY AND PERFORMANCE

Maximum seats:
The USB standard specifies a maximum of 127 USB devices connected to a single host controller. This means around 30 seats per USB controller, depending on how many devices are connected to each USB hub. In practice you will have a hard time finding a system which has that many ports available. I've used Fujitsu's TX100 S1 and TX100 S2 which can be expanded to 15 or 16 USB ports using all external and internal ports and an additional PCI-USB extension card.

While larger configurations are possible by using more PCI cards or intermediate hubs, those are limited by the USB 2.0 transfer speed (more devices on a single hub means slower graphics) and a bug in the Linux kernel.

Space and cable length:
USB 2.0 limits the cable length to 5 meters. On the market I've found good quality cables running 4.5 meters. This means that your multiseat system needs to be confined to a small physical space due to these limitations. In practice using a medium-sized multiseat system in a 30-square-meter space is doable and fits into these limits. This is roughly the size of a classroom in a school.

You can of course use daisy chaining (up to 5 hubs) and active USB extension cords (11 meters) or USB over CAT5 cables (up to 45 meters) but all of these interfere with USB signal strength and can lead to unpredictable behavior. For example I've seen errors opening USB devices when the power is insufficient or too high. Modern computer systems have built-in hardware protection and shut off USB ports or randomly reboot when the current on the wire is too strong. I've seen this on a number of occasions and the fix was to completely power off and unplug the system, then power it on again.

Also don't forget that USB video consumes a great deal of the limited USB 2.0 bandwidth. Depending on the workload of the system (e.g. office applications vs. multimedia) you could experience slow graphical response if using extension cords and daisy chaining.

Performance:
For regular desktop use (i.e. nothing in particular) I'd recommend using a 32-bit operating system. On 64-bit systems objects take a lot more memory and you'll need 3-4 times more for the same workload as on 32-bit. For example 16 users running Eclipse, gnome-terminal and Firefox will need less than 8GB of memory on 32-bit and more than 16GB on 64-bit. Python and Java are particularly known to use much more memory on 64-bit.

Regular desktop usage is not CPU intensive and a modern Xeon CPU has no issues with it. One exception is Flash, which always causes your CPU to choke. On multiseat that becomes an even bigger problem. If possible, disable or remove Flash from the system.

Multiseat doesn't make any difference when browsing, sending e-mail, etc. You shouldn't experience issues with networking unless your workload requires a high-speed connection or your bandwidth is too low. If this is the case you'd better use the USB NICs available in the docking stations and bond them together, add external PCI NICs or upgrade your networking infrastructure.

Disk performance is critical in multiseat, especially because it affects the look and feel of the system and is visible to the end users. It is usually good practice to place /home on a separate partition or even on a separate disk. Also consider disabling unnecessary caching in user space applications such as Firefox and Nautilus (thumbnails and cache).

On a system with 2 x 7200 RPM disks in a BIOS RAID1 configuration and a standard RHEL6 installation (i.e. no optimizations configured), where /, swap and /home are on the same RAID array, we have 15 users using GNOME, gedit, Firefox, gnome-terminal and gcc. The performance is comparable to a stand-alone desktop, with occasional spikes which cause GNOME to freeze for a second or two. It is expected that disabling unnecessary caching will make things better.

Depending on the workload (reads vs. writes) you should consider different RAID levels, file system types and settings and changing disk parameters. A good place to start is the "Storage Administration Guide" and "I/O Tuning Guide" at http://docs.redhat.com.

KNOWN ISSUES

  • Bug 28682 - input drivers support limited device numbers (EVDEV_MINORS is 32) - this bug will block you from adding more than 32 input devices of the same type. For multiseat that means 32 devices which are handled by the event driver, which includes mice, keyboards, joysticks and special platform events such as the Reboot/Poweroff buttons. This limits the available seats to around 15.
  • Bug 679122 - gnome-volume-control: Sound at 100% and no sound output - upon first login the user will not hear any sound regardless of the fact that the volume control application shows volume is at 100%.
  • Bug 682562 - gnome-volume-control doesn't respect PULSE_SINK/PULSE_SOURCE - the volume control application will not behave correctly and may confuse users.
  • Xorg will cause 100% CPU usage after logout - this is due to several factors. The initial multiseat configuration had a problem with input duplication. This was fixed by removing "-sharevts -novtswitch" from the X start line and substituting a specific VT - "vt07". This works fine unless one of the users logs out of their GNOME session. After that GDM will kill and restart its process and a new Xorg process will be spawned. The restarted instance will loop endlessly in the code path which tries to take control over the terminal. If you search on the Internet you will find plenty of bug reports related to this code path. The problem is in Xorg which doesn't properly handle the situation where it can't take control over the terminal. The solution is not to restart Xorg after the user session ends. This is done by setting AlwaysRestartServer=false in gdm.conf.
  • No integration with SELinux and ConsoleKit - while configuring SELinux in Permissive mode is an easy workaround, there's no easy workaround for ConsoleKit. Newer GDM versions register the user session with ConsoleKit and integrate that into the desktop. Missing integration means that some things will fail. For example NetworkManager will not allow the user to connect to a VPN because it thinks this user is not logged in.
  • No ACLs for external USB flash drives - this is missing upstream and is supposed to land in ConsoleKit. When a user plugs their USB flash drive into a multiseat system GNOME will try to mount it automatically. If there are multiple users logged in this will either fail or all of them will be able to access the flash drive.

PICTURES AND VIDEO

Pictures from one of our deployments can be found on Facebook (no login required): http://www.facebook.com/album.php?aid=54571&id=180150925328433. A demonstration video from the same deployment can be found at http://www.youtube.com/watch?v=7GYbCDGTz-4

If you are interested in commercial support please contact me!

FUTURE

In the open source world everything is changing and multiseat is no exception. While GDM and ConsoleKit patches are not yet integrated upstream there's a new project called systemd which aims at replacing the SysV init scripts system. It already has several configuration files for multiseat and I expect it will influence multiseat deployments in the future. Systemd will be available in Fedora 15.


