Tag Fedora

Tip: Try F10 When Editing grub2 Menu in EFI Mode

When editing the grub2 menu (especially in EFI mode) it tells you to press Ctrl-x to save your changes and continue the boot process. However, this doesn't work on my MacBook Air (see rhbz#1253637) and possibly on some other platforms. If that is the case, try pressing F10 instead. It works for me!

Call for Ideas: Graphical Test Coverage Reports

If you are working with Python and writing unit tests, chances are you are familiar with the coverage reporting tool. However, there are testing scenarios in which we either don't use unit tests or execute different code paths (test cases) independently of each other.

For example, this is the case with installation testing in Fedora. Because anaconda, the installer, is very complex, the easiest way is to test it live, not with unit tests. Even though we can get a coverage report (anaconda is written in Python), it reflects only the test case it was collected from.

coverage combine can be used to combine several data files and produce an aggregate report. This can tell you how much test coverage you have across all your tests.
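
For example, a typical aggregate session looks like this (test_one.py and test_two.py being hypothetical test scripts):

$ coverage run --parallel-mode test_one.py
$ coverage run --parallel-mode test_two.py
$ coverage combine
$ coverage report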

As far as I can tell, Python's coverage doesn't tell you how many times a particular line of code has been executed. It also doesn't tell you which test cases executed a particular line (see PR #59). In the Fedora example, I have the feeling that many of our tests touch the same code base without contributing much to the overall test coverage. So I started working on these items.

I imagine a script which will read coverage data from several test executions (preferably in JSON format, PR #60) and produce a graphical report similar to what GitHub does for your commit activity.

See an example here!

The example uses darker colors to indicate more line executions and lighter colors for fewer. Check the HTML for the actual numbers because there are no hints yet. The input JSON files are here and the script which generates the above HTML is on GitHub.

Now I need your ideas and comments!

What kinds of coverage reports are you using in your job? How do you generate them? What do they look like?

Videos from Bulgaria Web Summit 2015

Bulgaria Web Summit 2015 is over. The event was incredible and I had a lot of fun moderating the main room. We had many people coming from other countries and I've made lots of new friends. Thank you to everyone who attended!

You can find video recordings of all talks in the main room (in order of appearance) below:

Hope to see you next time in Sofia!

Meanwhile I learned about DEVit in Thessaloniki in May and another event in Zagreb in October. See you there :)

How to Find if LVM Volume is Thinly Provisioned

The latest versions of Red Hat Enterprise Linux, CentOS and Fedora all support LVM thin provisioning. Here's how to tell if a logical volume has been thinly provisioned or not.

Use lvs to display volume information and look under the Attr column. The lv_attr bits have the following meaning:

1 Volume type: (C)ache, (m)irrored, (M)irrored without initial sync, (o)rigin, (O)rigin with merging snapshot, (r)aid, (R)aid without initial sync, (s)napshot, merging (S)napshot, (p)vmove, (v)irtual, mirror or raid (i)mage, mirror or raid (I)mage out-of-sync, mirror (l)og device, under (c)onversion, thin (V)olume, (t)hin pool, (T)hin pool data, raid or pool m(e)tadata or pool metadata spare.

This is how the lvs output looks with a regular LVM setup:

# lvs
  LV   VG              Attr       LSize  Pool Origin Data%  Meta%  Move Log Cpy%Sync Convert
  root rhel_dhcp70-183 -wi-ao---- 17,47g                                                    
  swap rhel_dhcp70-183 -wi-ao----  2,00g

When using LVM thin provisioning you're looking for the left-most attribute bit to be V, t or T. Here's an example:

# lvs
  LV     VG              Attr       LSize  Pool   Origin Data%  Meta%  Move Log Cpy%Sync Convert
  pool00 rhel_dhcp71-101 twi-aotz-- 14,55g               7,52   3,86                            
  root   rhel_dhcp71-101 Vwi-aotz-- 14,54g pool00        7,53                                   
  swap   rhel_dhcp71-101 -wi-ao----  2,00g
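
If you need to check this from a script, filtering on that first attribute character works; a minimal sketch, assuming the default lvs column layout:

# lvs --noheadings -o lv_name,lv_attr | awk '$2 ~ /^[VtT]/ { print $1 }'
pool00
root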

Tip: Linux-IO default LUN is 0 instead of 1

I've been testing iBFT in KVM, which worked quite well with a RHEL 6 iSCSI target but failed miserably when I switched to a RHEL 7 iSCSI target.

iPXE> dhcp net0
DHCP (net0 52:54:00:12:34:56)... ok
iPXE> set keep-san 1
iPXE> sanboot iscsi:10.0.0.1:::1:iqn.2015-05.com.example:target1
Could not open SAN device: Input/output error (http://ipxe.org/1d704539)
iPXE>

The error page says

Note that the default configuration when Linux is the target is for the disk to be LUN 1.

Well this is not true for Linux-IO (targetcli). The default LUN is 0!

iPXE> sanboot iscsi:10.0.0.1:::0:iqn.2015-05.com.example:target1
Registered SAN device 0x80
Booting from SAN device 0x80
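
Alternatively, if you want Linux-IO to behave as the iPXE documentation describes, I believe targetcli lets you request an explicit LUN number when creating the LUN, something along these lines:

/iscsi/iqn.20...:target1/tpg1/luns> create /backstores/fileio/disk1 lun=1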

Kudos to Bruno Goncalves from Red Hat for helping me debug this issue!

How to Configure targetcli to Listen on IPv4 and IPv6

In order to configure targetcli to listen on both IPv4 and IPv6 one has to delete the default IPv4 portal and replace it with an IPv6 one.

# targetcli 
/>
/> cd iscsi/iqn.2015-04.com.example:target1/tpg1/portals
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ............................................................................................................ [Portals: 1]
  o- 0.0.0.0:3260 ............................................................................................................. [OK]
/iscsi/iqn.20.../tpg1/portals> delete 0.0.0.0 3260
Deleted network portal 0.0.0.0:3260
/iscsi/iqn.20.../tpg1/portals> create ::0
Using default IP port 3260
Created network portal ::0:3260.
/iscsi/iqn.20.../tpg1/portals> ls
o- portals ............................................................................................................ [Portals: 1]
  o- [::0]:3260 ............................................................................................................... [OK]
/iscsi/iqn.20.../tpg1/portals> exit

# netstat -antp | grep 3260
tcp6       0      0 :::3260                 :::*                    LISTEN

It appears the target is listening only on IPv6 but in fact it will also accept connections over IPv4. I've tried it.
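
For example, discovery over IPv4 works just fine (with 10.0.0.1 standing in for the target's IPv4 address):

# iscsiadm --mode discovery --type sendtargets --portal 10.0.0.1
10.0.0.1:3260,1 iqn.2015-04.com.example:target1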

This is a bit counterintuitive. However, if you try adding the IPv6 portal without removing the default IPv4 one, targetcli will throw an error:

/iscsi/iqn.20.../tpg1/portals> create ::0
Using default IP port 3260
Could not create NetworkPortal in configFS.
/>

For more information about targetcli usage see my previous post How to Configure iSCSI Target on Red Hat Enterprise Linux 7.

How to Configure iSCSI Target on Red Hat Enterprise Linux 7

Linux-IO (LIO) Target is an open-source implementation of the SCSI target that has become the standard one included in the Linux kernel and the one present in Red Hat Enterprise Linux 7. The popular scsi-target-utils package is replaced by the newer targetcli which makes configuring a software iSCSI target quite different.

In earlier versions one had to edit the /etc/tgtd/targets.conf file and run service tgtd restart. Here is an example configuration:

<target iqn.2008-09.com.example:server.target1>
    backing-store /dev/vg_iscsi/lv_lun1
    backing-store /dev/vg_iscsi/lv_lun2

    incominguser user2 secretpass23
    outgoinguser userA secretpassA
</target>

targetcli can be used either as an interactive shell or via standalone commands. Here is an example shell session which creates a file-based disk image. Comments are provided inline:

# yum install -y targetcli
# systemctl enable target

# targetcli 

# first create a disk image with the name of disk1. All files are sparsely created.

/> backstores/fileio create disk1 /var/lib/libvirt/images/disk1.img 10G
Created fileio disk1 with size 10737418240

# create an iSCSI target. NB: this only defines the target

/> iscsi/ create iqn.2015-04.com.example:target1
Created target iqn.2015-04.com.example:target1.
Created TPG 1.
Global pref auto_add_default_portal=true
Created default portal listening on all IPs (0.0.0.0), port 3260.

# TPGs (Target Portal Groups) allow an iSCSI target to support multiple complete
# configurations within one target. This is useful for complex quality-of-service
# setups. targetcli automatically creates one TPG when the target is created,
# and almost all setups only need one.

# switch to TPG definition for our target

/> cd iscsi/iqn.2015-04.com.example:target1/tpg1

# list the contents

/iscsi/iqn.20...:target1/tpg1> ls 
o- tpg1 ..................................................................................................... [no-gen-acls, no-auth]
  o- acls ................................................................................................................ [ACLs: 0]
  o- luns ................................................................................................................ [LUNs: 0]
  o- portals .......................................................................................................... [Portals: 1]
    o- 0.0.0.0:3260 ........................................................................................................... [OK]

# create a portal, i.e. an IP:port pair which exposes the target on the network

/iscsi/iqn.20...:target1/tpg1> portals/ create
Using default IP port 3260
Binding to INADDR_ANY (0.0.0.0)
This NetworkPortal already exists in configFS.

# create logical units (LUNs) aka disks inside our target
# in other words bind the target to its on-disk storage

/iscsi/iqn.20...:target1/tpg1> luns/ create /backstores/fileio/disk1
Created LUN 0.

# disable authentication

/iscsi/iqn.20...:target1/tpg1> set attribute authentication=0
Parameter authentication is now '0'.

# enable read/write mode

/iscsi/iqn.20...:target1/tpg1> set attribute demo_mode_write_protect=0
Parameter demo_mode_write_protect is now '0'.

# Enable generate_node_acls mode. This can be thought of as 
# "ignore ACLs mode" -- both  authentication and LUN mapping
# will then use the TPG settings.

/iscsi/iqn.20...:target1/tpg1> set attribute generate_node_acls=1
Parameter generate_node_acls is now '1'.

/iscsi/iqn.20...:target1/tpg1> ls
o- tpg1 ........................................................................................................ [gen-acls, no-auth]
  o- acls ................................................................................................................ [ACLs: 0]
  o- luns ................................................................................................................ [LUNs: 1]
  | o- lun0 ..................................................................... [fileio/disk1 (/var/lib/libvirt/images/disk1.img)]
  o- portals .......................................................................................................... [Portals: 1]
    o- 0.0.0.0:3260 ........................................................................................................... [OK]

# exit or Ctrl+D will save the configuration under /etc/target/saveconfig.json

/iscsi/iqn.20...:target1/tpg1> exit
Global pref auto_save_on_exit=true
Last 10 configs saved in /etc/target/backup.
Configuration saved to /etc/target/saveconfig.json

# after creating a second target the layout looks like this

/> ls
o- / ......................................................................................................................... [...]
  o- backstores .............................................................................................................. [...]
  | o- block .................................................................................................. [Storage Objects: 0]
  | o- fileio ................................................................................................. [Storage Objects: 2]
  | | o- disk1 .................................................. [/var/lib/libvirt/images/disk1.img (10.0GiB) write-back activated]
  | | o- disk2 .................................................. [/var/lib/libvirt/images/disk2.img (10.0GiB) write-back activated]
  | o- pscsi .................................................................................................. [Storage Objects: 0]
  | o- ramdisk ................................................................................................ [Storage Objects: 0]
  o- iscsi ............................................................................................................ [Targets: 2]
  | o- iqn.2015-04.com.example:target1 ................................................................................... [TPGs: 1]
  | | o- tpg1 .................................................................................................. [gen-acls, no-auth]
  | |   o- acls .......................................................................................................... [ACLs: 0]
  | |   o- luns .......................................................................................................... [LUNs: 1]
  | |   | o- lun0 ............................................................... [fileio/disk1 (/var/lib/libvirt/images/disk1.img)]
  | |   o- portals .................................................................................................... [Portals: 1]
  | |     o- 0.0.0.0:3260 ..................................................................................................... [OK]
  | o- iqn.2015-04.com.example:target2 ................................................................................... [TPGs: 1]
  |   o- tpg1 .................................................................................................. [gen-acls, no-auth]
  |     o- acls .......................................................................................................... [ACLs: 0]
  |     o- luns .......................................................................................................... [LUNs: 1]
  |     | o- lun0 ............................................................... [fileio/disk2 (/var/lib/libvirt/images/disk2.img)]
  |     o- portals .................................................................................................... [Portals: 1]
  |       o- 0.0.0.0:3260 ..................................................................................................... [OK]
  o- loopback ......................................................................................................... [Targets: 0]


# enable CHAP and Reverse CHAP (mutual) for both discovery and login authentication

# discovery authentication is enabled under the global iscsi node

/> cd /iscsi
/iscsi> set discovery_auth enable=1
/iscsi> set discovery_auth userid=IncomingUser
/iscsi> set discovery_auth password=SomePassword1
/iscsi> set discovery_auth mutual_userid=OutgoingUser
/iscsi> set discovery_auth mutual_password=AnotherPassword2

# login authentication is enabled either under the TPG node or under ACLs

/iscsi> cd iqn.2015-04.com.example:target1/tpg1
/iscsi/iqn.20...:target1/tpg1> set attribute authentication=1
/iscsi/iqn.20...:target1/tpg1> set auth userid=IncomingUser2
/iscsi/iqn.20...:target1/tpg1> set auth password=SomePassword3
/iscsi/iqn.20...:target1/tpg1> set auth mutual_userid=OutgoingUser2
/iscsi/iqn.20...:target1/tpg1> set auth mutual_password=AnotherPassword4
/iscsi/iqn.20...:target1/tpg1> exit

Hints:

  • activating the target service at boot is mandatory, otherwise your configuration won’t be read after a reboot
  • if you type cd, targetcli will display an interactive node tree
  • after configuration is saved you don't need to restart anything
  • the old scsi-target-utils doesn't support discovery authentication
  • targetcli allows kernel memory to be shared as a block SCSI device via the ramdisk backstore. It also supports "nullio" mode, which discards all writes and returns all-zeroes for reads.
  • I'm having trouble configuring portals to listen on both any IPv4 address and any IPv6 address the system has. I still haven't figured that out entirely.
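
Bonus hint: because targetcli also accepts commands as arguments, the whole setup can be scripted non-interactively. A minimal, untested sketch reusing the names from the session above:

# targetcli /backstores/fileio create disk1 /var/lib/libvirt/images/disk1.img 10G
# targetcli /iscsi create iqn.2015-04.com.example:target1
# targetcli /iscsi/iqn.2015-04.com.example:target1/tpg1/luns create /backstores/fileio/disk1
# targetcli saveconfig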

For more information please read Chapter 25 from Red Hat's Storage Administration Guide or check out Red Hat Enterprise Linux 7 books on Amazon.

SNAKE is no Longer Needed to Run Installation Tests in Beaker

This is a quick status update for one of the pieces of Fedora QA infrastructure and mostly a self-note.

Previously, to control the kickstart configuration used during installation in Beaker, one had to either modify the job XML or use SNAKE (bkr workflow-snake) to render a kickstart configuration from a Python template.

SNAKE presented challenges when deploying and using beaker.fedoraproject.org and is virtually unmaintained.

I present the new bkr workflow-installer-test which uses Jinja2 templates to generate a kickstart configuration when provisioning the system. This is already available in beaker-client-0.17.1.

The templates make use of all Jinja2 features (as far as I can tell) so you can create very complex ones. You can even include snippets from one template into another if required. The standard context that is passed to the template is:

  • DISTRO - if specified, the distro name
  • FAMILY - as returned by Beaker server, e.g. RedHatEnterpriseLinux6
  • OS_MAJOR and OS_MINOR - also taken from the Beaker server, e.g. OS_MAJOR=6 and OS_MINOR=5 for RHEL 6.5
  • VARIANT - if specified
  • ARCH - CPU architecture like x86_64
  • any parameters passed to the test job with --taskparam. They are processed last and can override previous values. An example invocation is shown below.
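
For example, a hypothetical invocation looks like this (--distro, --arch and --taskparam are standard bkr workflow options; MY_PARAM is a made-up template variable):

$ bkr workflow-installer-test --distro Fedora-20 --arch x86_64 \
      --taskparam "MY_PARAM=some-value"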

Installation related tests at fedora-beaker-tests have been updated with ks.cfg.tmpl templates to use with this new workflow.

This workflow also has the ability to return boot arguments for the installer if needed. If any, they should be defined in a {% block kernel_options %}{% endblock %} block inside the template. A simpler variant is to define a comment line that starts with ## kernel_options:
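
For example, with hypothetical boot arguments, the two variants look like this:

{% block kernel_options %}console=ttyS0,115200{% endblock %}

## kernel_options: console=ttyS0,115200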

There are still a few issues which need to be fixed before beaker.fedoraproject.org can be used by the general public though. I will be writing another post about that so stay tuned.

Book Review - Last 3 Months

Hello folks, this is my book list for the past 3 months. It ranges from tech and start-up related titles to Japanese and children's stories. Here's my quick review.

Lean UX

Lean UX: Applying Lean Principles to Improve User Experience is the second book I read on the subject after first reading UX for Lean Startups.

It was published before UX for Lean Startups and is much more about principles than practical methods. Honestly, I'm not sure I took any real value out of it. Maybe if I had read the two books in reverse order it would have been better.

The Hacienda - How Not to Run a Club

The Hacienda: How Not to Run a Club by Peter Hook is one of my favorites. It covers a great deal of music and clubland history, depicts crazy parties and describes the adventure of owning one of the most popular nightclubs in the world. All of that while struggling to make a buck and pouring countless pounds into a black hole.

The irony is that The Hacienda became a legendary place only after it had closed down and was later demolished.

A must read for anyone who is considering business in the entertainment industry or wants to read a piece of history. My favorite quote of the book:

Years after, Tony Wilson found himself sitting opposite Madonna at dinner.

‘I eventually plucked up the courage to look across the table to Madonna and ask, “Are you aware that the first place you appeared outside of New York was our club in Manchester?”

‘She gave me an ice-cold stare and said, “My memory seems to have wiped that.”’

Simple Science Experiments

Simple Science Experiments by Hans Jürgen Press is a very old book listing 200 experiments which you can do at home using household materials. It is great for teaching basic science to children. The book is very popular and is available in many languages and editions - just search for it.

I used to have this as a kid and was able to purchase the 1987 Bulgarian edition at an antique bookstore in Varna two months ago.

Ronia, the Robber's Daughter

Decided to experiment a little bit and found Ronia, the Robber's Daughter. It's a children's book telling the story of two kids whose fathers are rival robbers. The book is an easy read (2-3 hrs before bed time) with stories of magic woods, dwarfs and scary creatures mixed with human emotions and the good vs. bad theme.

Japanese Short Stories

I've managed to find a 1973 compilation of Japanese short stories translated into Bulgarian. Also one of my favorite books.

If I'm not mistaken these are classic Japanese authors, nothing modern or cutting edge. Most of the action happens during the early 1900s as far as I can tell. What impresses me most is the detailed description of nature and surrounding details in all of the stories.

The Singularity Is Near

I've also started The Singularity Is Near: When Humans Transcend Biology by Ray Kurzweil.

It's a bit hard to read because the book is full of so many technical details about genetics, nanotechnology, robotics and AI.

Ray depicts a bright future where humans will transcend our biological limitations and essentially become pure intelligence. Definitely a good read and I will tell you more about it when I finish it.


What have you been reading since January? I'd love to see your book list or connect on Goodreads.

OpenSource.com article - 10 steps to migrate your closed software to open source

Difio is a Django based application that keeps track of packages and tells you when they change. Difio was created as closed software, then I decided to migrate it to open source ....

Read more at OpenSource.com

Btw I'm wondering if Telerik will share their experience opening up the core of their Kendo UI framework during the webinar tomorrow.

Spoiler: How to Open Source Existing Proprietary Code in 10 Steps

We've heard about companies opening up their proprietary software products; this is hardly news nowadays. But have you wondered what it is like to migrate production software from closed to open source? I would like to share my own experience of going open source as seen from behind the keyboard.

Difio was recently open sourced and the steps to go through were:

  • Simplify - remove everything that can be deleted
  • Create self-contained modules, i.e. re-organize the file structure
  • Separate internal from external modules
  • Refactor the existing code
  • Select license and update copyright
  • Update 3rd party dependencies to latest versions and add requirements.txt
  • Add README and verbose settings example
  • Split difio/ into its own git repository
  • Test stand alone deployments on fresh environment
  • Announce

Do you want to know more? Use the comments and tell me what exactly! I'm writing a longer version of this article so stay tuned!

Positive Biological Effects of Open Source on Humans

Recently I watched a talk by Simon Sinek about leadership. He talks about Endorphins, Dopamine, Serotonin and Oxytocin and how they make us feel and act in particular ways. Then I thought: maybe that's why working in the open source field has always felt great and natural to me. Maybe we humans are programmed to follow the open source way!

This article scratches the surface where body chemistry and open source intersect. I hope it will help both volunteers and community managers gain insight into the driving forces in our bodies and how they relate to the open source world, and prompt further exploration. By seeking to better understand the positive effects and avoid the negative ones we can become better contributors and leaders, which ultimately helps our communities.

Endorphins

Endorphins stand for endurance. Their job is to mask physical pain, and it has been suggested that they have evolutionary roots in helping early humans survive. Athletes experience the so-called runner's high.

In the open source world one may work on a feature or task for hours and hours without feeling exhausted. The task itself keeps you going and excited. This is endorphins going through your brain.

In software one may experience an endorphin rush during public release days for example. For large projects like Fedora the release process includes many steps and may take several hours. During all that time the release engineer is usually available regardless of their native time zone.

Effects of endorphins could potentially increase the likelihood of injury or extreme exhaustion, as pain sensation could be more easily ignored. Work and rest cycles need to be properly balanced.

Dopamine

This is the feeling when we achieve our goals or find something we were looking for. Dopamine helps us get things done! This is why we're told to write down our goals and then cross them off. It makes people more productive.

Getting dopamine through open source is very easy - all you need to do is fix a bug, then another one, and another one, and another one ... After every task you complete the body gets a small dopamine fix. "Release early, release often ..." and you get your fix :).

Dopamine however is highly addictive and destructive if unbalanced. It has the same negative effects as any other addiction - alcohol, drugs, etc. Be aware of that and don't fall for the performance trap.

Endorphins and Dopamine are the so-called selfish chemicals. You can get them without external help. The next two are the social chemicals.

Serotonin

Serotonin is responsible for feelings of pride and status and for assessing social rank. It is produced when you are recognized for achievements by the open source community or credited by somebody (e.g. "Johnny Bravo mentioned a great idea on IRC today").

As a contributor you may work on items which will help you get recognition, but ultimately this is not for you to decide. However, practice shows that credit and recognition are relatively easy to get in the open source world, provided you have contributed to making the project and the community better in some way.

In software this means being granted commit rights to a repository, being in the top spot of some metrics, having your blog read by other members, or simply people asking for your help or your opinion on some topic.

Serotonin is considered the leadership chemical. As one becomes a leader recognized by the community there's a catch: the more your status goes up, the more work you have to do. The more people recognize you as the leader, the more they expect you to sacrifice yourself in case it all goes Pete Tong. If you are not ready to step up, find a more suitable place in the community instead.

Oxytocin

Oxytocin is responsible for feelings of love, trust and friendship. It makes us feel safe. It is also very good for the body because it makes us healthier, boosts our immune system, increases ability to solve problems and increases creativity.

One way to get Oxytocin is through physical touch, e.g. a handshake. This is probably one of the reasons beer gatherings are so popular among open source developers. Working digitally, we need a way to reinforce human bonds in our communities. Knowing the person on the other end of the wire ultimately makes us feel safer. If you are in open source, just go to that conference or local beer bash you've been wanting to attend. It is good for you (but don't get drunk).

Another way to get Oxytocin is by performing or witnessing acts of human generosity. This comes naturally in the open source world, where people give up their free time and energy to work towards a shared goal. Just by working in an open source environment you get all that goodness.

The best thing about Oxytocin is that it is not addictive and slowly builds up in the body. The bad side is that it takes a while to build up. This is why you have to stay a little longer in open source before it starts feeling safe and welcoming.

Cortisol

The last chemical Simon talks about is Cortisol. It is bad, very bad. It will crash your body. Cortisol means stress. It is designed to keep humans (and animals) alive by hyper-tuning our senses in case of danger. The trouble is you are not supposed to have it in your body for long periods of time because it shuts down non-essential systems to deliver that extra energy.

Luckily most open source projects are not stressful and I think they can be considered safe places to work. In the end one can always shift to another role or move to another project if things become too stressful.

By committing to help another member or perform service to the community our bodies get all the good stuff and beat the negative. Service to a community is exactly what open source does! See, humans are programmed to live and work the open source way!

How Do You Test Thai Scalable Fonts

Recently I wrote about testing fonts. I finally managed to get an answer from the authors of thai-scalable-fonts.

What is your approach for testing Fonts-TLWG?

It's not automated test. What it does is generate PDF with sample texts at several sizes (the waterfall), pangrams, and glyph table. It needs human eyes to investigate.

What kind of problems is your test suite designed for?

  • Shaping
  • Glyph coverage
  • Metrics

We also make use of fontforge features to make spotting errors easier, such as:

  • Show extremas
  • Show almost vertical/horizontal lines/curves

Theppitak Karoonboonyanan, Fonts-TLWG

How do You Test Fonts

Previously I mentioned testing fonts but didn't have any idea how this is done. Authors Khaled Hosny of Amiri Font and Steve White of GNU FreeFont provided valuable insight and material for further reading. I asked them:

  • What is your approach for testing?
  • What kind of problems is your test suite designed for?

Here's what they say:

Currently my test suite consists of text strings (or lists of code points) and expected output glyph sequences and then use HarfBuzz (through its hb-shape command line tool) to check that the fonts always output the expected sequence of glyphs, sometimes with the expected positioning as well. Amiri is a complex font that have many glyph substitution and positioning rules, so the test suite is designed to make sure those rules are always executed correctly to catch regressions in the font (or in HarfBuzz, which sometimes happens since the things I do in my fonts are not always that common).

I think Lohit project do similar testing for their fonts, and HarfBuzz itself has a similar test suite with a bunch of nice scripts (though they are not installed when building HarfBuzz, yet[1]).

Recently I added more kinds of tests, namely checking that OTS[2] sanitizes the fonts successfully as this is important for using them on the web, and a test for a common mistakes I made in my feature files that result in unexpected blank glyphs in the fonts.

  1. https://github.com/behdad/harfbuzz/pull/12
  2. https://github.com/khaledhosny/ots

Khaled Hosny, Amiri Font

The answer is complicated. I'll do what I can to answer.

First, the FontForge application has a "verification" function which can be run from a script, and which identifies numerous technical problems.

FontForge also has a "Find Problems" function that I run by hand.

The monospaced face has special restrictions, first that all glyphs of non-zero width must be of the same width, and second, that all glyphs lie within the vertical bounds of the font.

Beside this, I have several other scripts that check for a few things that FontForge doesn't (duplicate names, that glyph slots agree with Unicode code within Unicode character blocks).

Several tests scripts have yet to be uploaded to the version control system -- because I'm unsure of them.

There is a more complicated check of TrueType tables, which attempts to find cases of tables that have been "shadowed" by the script/language specification of another table. This is helpful, but works imperfectly.

ALL THAT SAID,

In the end, every script used in the font has to be visually checked. This process takes me weeks, and there's nothing systematic about it, except that I look at printout of documents in each language to see if things have gone awry.

For a few documents in a few languages, I have images of how text should look, and can compare that visually (especially important for complex scripts.)

A few years back, somebody wrote a clever script that generated images of text and compared them pixel-by-pixel. This was a great idea, and I wish I could use it more effectively, but the problem was that it was much too sensitive. A small change to the font (e.g. PostScript parameters) would cause a small but global change in the rendering. Also the rendering could vary from one version of the rendering software to another. So I don't use this anymore.

That's all I can think of right now.

In fact, testing has been a big problem in getting releases out. In the past, each release has taken at least two weeks to test, and then another week to fix and bundle...if I was lucky. And for the past couple of years, I just haven't been able to justify the time expenditure. (Besides this, there are still a few serious problems with the fonts--once again, a matter of time.)

Have a look at the bugs pages, to get an idea of work being done.

http://savannah.gnu.org/bugs/?group=freefont

Steve White, GNU FreeFont

I'm not sure if ImageMagick or PIL can help solve the render-and-compare problem Steve is talking about. They can definitely be used for image comparison, so coupled with some rendering library it may be worth a quick try.
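
For the comparison part, ImageMagick's compare tool can at least quantify pixel differences; a quick sketch (file names are hypothetical):

$ compare -metric AE reference.png rendered.png diff.png
0

A result of 0 means no differing pixels, though this of course doesn't address the over-sensitivity problem Steve describes.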

If you happen to know more about fonts, please join me in improving overall test coverage in Fedora by designing test suites for fonts packages.

Last Week in Fedora QA

Here are some highlights from the past week discussions in Fedora which I found interesting or participated in.

Call to Action: Improving Overall Test Coverage in Fedora

I cannot stress enough how important it is to further improve test coverage in Fedora! You can help too. Here's how:

  • Join upstream and create a test suite for a package you find interesting;
  • Provide patches - first patch came in less than 30 minutes of initial announcement :);
  • Review packages in the wiki and help identify false negatives;
  • Forward to people who may be interested to work on these items;
  • Share and promote in your local open source and developer communities;

Auto BuildRequires

Auto-BuildRequires is a simple set of scripts which complements rpmbuild by automatically suggesting BuildRequires lines for the just-built package.

It would be interesting to have this integrated into Koji and/or a continuous integration environment and to compare the output between every two consecutive builds (i.e. older and newer package versions). It sounds like a good way to identify newly added or removed dependencies and update the package spec accordingly.

How To Test Fonts Packages

This is exactly what Christopher Meng asked and frankly I have no idea.

I've come across a few fonts packages (amiri-fonts, gnu-free-fonts and thai-scalable-fonts) which seem to have some sort of test suites but I don't know how they work or what type of problems they test for. On top of that all three have a different way of doing things (e.g. not using a standardized test framework or a variation of such).

I'll keep you posted on this once I manage to get more info from upstream developers.

Is URL Field in RPM Useless

So is it? Opinions here differ from totally useless to "don't remove it, I need it". However, I ran a small test and out of 2574 RPMs on the source DVD around 40% returned "something different than HTTP 200 OK". This means 40% potentially broken URLs!

The majority are responses in the 3XX range and fewer than 10% are actual errors (4XX, 5XX, missing URLs or connection errors).

It will be interesting to see if this field can be removed from rpm altogether. I don't think that will happen soon, but if we don't use it why have it there?
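
In essence the check boils down to something like the following sketch (a simplification; my actual script is linked below):

$ for url in $(rpm -qp --qf '%{URL}\n' *.src.rpm | sort -u); do
      code=$(curl --head --silent --output /dev/null --write-out '%{http_code}' "$url")
      [ "$code" != "200" ] && echo "$code $url"
  done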

My script for the test is here.

Call to Action: Improving Overall Test Coverage in Fedora

Around Christmas 2013 I said

... it looks like on average 30% of the packages execute their test suites at build time in the %check section and less than 35% have test suites at all! There’s definitely room for improvement and I plan to focus on this during 2014!

I've recently started working on this goal by first identifying potential offending packages and discussing the idea on Fedora's devel, packaging and test mailing lists.

May I present to you nearly 2000 packages which need your love:

The intent for these pages is to serve as a source of working material for Fedora volunteers.

How Can I Help

  • Join upstream and create a test suite for a package you find interesting;
  • Provide patches - first patch came in less than 30 minutes of initial announcement :);
  • Review packages in the wiki and help identify false negatives;
  • Forward to people who may be interested to work on these items;
  • Share and promote in your local open source and developer communities;

Important

If you would like to gain some open source practice and QA experience I will happily provide mentorship and general help so you can start working on Fedora. Just ping me!

Skip or Render Specific Blocks from Jinja2 Templates

I wasn't able to find detailed information on how to skip rendering or render only specific blocks from Jinja2 templates, so here's my solution. Hopefully you will find it useful too.

With the template below I want to be able to render only the kernel_options block as a single line and then render the rest of the template excluding kernel_options.

base.j2
{% block kernel_options %}
console=tty0
    {% block debug %}
        debug=1
    {% endblock %}
{% endblock kernel_options %}

{% if OS_MAJOR == 5 %}
key --skip
{% endif %}

%packages
@base
{% if OS_MAJOR > 5 %}
%end
{% endif %}

To render a particular block you have to use the low-level Jinja2 API template.blocks. This returns a dict of block rendering functions which need a Context to work with.

The second part is trickier. To remove a block we have to create an extension which will filter it out. The provided SkipBlockExtension class does exactly this.

Last but not least - if you'd like to use both together you have to disable caching in the Environment (so you get a fresh template every time), render your blocks first, configure env.skip_blocks and render the entire template without the specified blocks.

jinja2-render
#!/usr/bin/env python

import os
import sys
from jinja2.ext import Extension
from jinja2 import Environment, FileSystemLoader


class SkipBlockExtension(Extension):
    def __init__(self, environment):
        super(SkipBlockExtension, self).__init__(environment)
        # list of block names to remove, configured via env.skip_blocks
        environment.extend(skip_blocks=[])

    def filter_stream(self, stream):
        block_level = 0
        skip_level = 0
        in_endblock = False

        for token in stream:
            if (token.type == 'block_begin'):
                if (stream.current.value == 'block'):
                    # entering a {% block %} tag - remember the nesting
                    # level and check if its name is on the skip list
                    block_level += 1
                    if (stream.look().value in self.environment.skip_blocks):
                        skip_level = block_level

            if (token.value == 'endblock'):
                in_endblock = True

            # yield tokens only while we're not inside a skipped block
            if skip_level == 0:
                yield token

            if (token.type == 'block_end'):
                if in_endblock:
                    in_endblock = False
                    block_level -= 1

                    # just left the skipped block - resume yielding
                    if skip_level == block_level+1:
                        skip_level = 0


if __name__ == "__main__":
    context = {'OS_MAJOR' : 5, 'ARCH' : 'x86_64'}

    abs_path  = os.path.abspath(sys.argv[1])
    dir_name  = os.path.dirname(abs_path)
    base_name = os.path.basename(abs_path)

    env = Environment(
                loader = FileSystemLoader(dir_name),
                extensions = [SkipBlockExtension],
                cache_size = 0, # disable cache b/c we do 2 get_template()
            )

    # first render only the block we want
    template = env.get_template(base_name)
    lines = []
    for line in template.blocks['kernel_options'](template.new_context(context)):
        lines.append(line.strip())
    print "Boot Args:", " ".join(lines)
    print "---------------------------"

    # now instruct SkipBlockExtension which blocks we don't want
    # and get a new instance of the template with these blocks removed
    env.skip_blocks.append('kernel_options')
    template = env.get_template(base_name)
    print template.render(context)
    print "---------------------------"

The above code results in the following output:

$ ./jinja2-render ./base.j2 
Boot Args: console=tty0 debug=1 
---------------------------

key --skip

%packages
@base
---------------------------

Teaser: this is part of my effort to replace SNAKE with a client-side kickstart template engine for Beaker so stay tuned!

7 Years and 1400 Bugs Later as Red Hat QA

Today I celebrate my 7th year working at Red Hat's Quality Engineering department. Here's my story!

Platform QE

On a cold winter Friday in 2007 I left my job as a software developer in Sofia, packed my stuff, purchased my first laptop and on Sunday jumped on the train to Brno to join the Release Test Team at Red Hat. Little did I know what it was all about. When I was offered the position I was on a very noisy bus and had to pick between two positions. I didn't quite understand what the options were and just picked the second one. Luckily everything turned out great and continues to this day.

I'm sharing my experience and highlighting some bugs which I've found. Hopefully you will find this interesting and amusing. If you are a QA engineer I urge you to take a look at my public bug portfolio, dive into details, read the comments and learn as much as you can.

What do I do exactly

Of all the QE teams in Red Hat, the Release Test Team is the first and the last to test a release. The team has both a technical function and a more managerial one. Our focus is on the core Red Hat Enterprise Linux product. Unfortunately I can't go into much detail because this is not a public facing unit. I will limit myself to public and/or non-sensitive information.

We are the first to test a new nightly build or a snapshot of the upcoming RHEL release. If the tree is installable other teams take over and do their magic. At the end, when bits are published live, we're the last to verify that content is published where it is expected to be. In short, we cover the work of the release engineering team, which is to build a product and publish the contents for consumption.

The same principles apply to Fedora although the engagement here is less demanding.

Personally, I have been and continue to be responsible for the Red Hat Enterprise Linux 5 family of releases. It's up to me to give the go-ahead for further testing or to request a re-spin. This position also has the power to block and delay the GA release, until things are sorted out, if not happy with testing or if there is a considerable risk of failure.

Like in other QA teams I create test plan documents, write test case scenarios, implement test automation scripts (and sometimes tools), regularly execute said test plans and test cases, find and report any new bugs and verify old ones are fixed. Most importantly make sure RHEL installs and is usable for further testing :).

Sometimes I have to deal with capacity planning and as RHEL 5 installation test lead I have to organize and manage the entire installation testing campaign for that product.

My favorite testing technique is exploratory testing.

Stats and Numbers

It is hard (if not impossible) to measure QA work with numbers alone but here are some interesting facts about my experience so far.

  • Nearly 1400 bugs filed (1390 at the time of writing);
  • Reported bugs across 32 different products. Top 3 being RHEL 6, RHEL 5 and Fedora (1000+ bugs);
  • Top 3 components for reporting bugs against: anaconda, releng, kernel;
  • Nearly 100 bugs filed in my first year 2007;
  • The 3 most productive years being 2010, 2009 and 2011 (800+ bugs);
  • Filed around 200 bugs/year, which is about 1 bug per working day;
  • 35th top bug reporter (excluding robot accounts). I was in the top 10 a few years back;

Many of the bugs I report are private so if you'd like to know more stats just ask me and I'll see what I can do.

2007

My very first bug is RHBZ #231860(private) which is about the graphical update tool Pup which used to show the wrong number of available updates.

Then I played with adding Dogtail support to Anaconda. While initially this was rejected (Fedora 6/7), it was implemented a few years later (Fedora 9) and then removed again during the big Anaconda rewrite.

I spent my time working extensively on RHEL 5, battling multi-lib issues and SELinux denials and generally making the 5 family less rough. Because I was still on-boarding I worked on everything I could get my hands on and also did some work on RHEL3-U9 (the latest release before EOL) and some RHEL4-U6 testing.

With ia64 on RHEL3 I found a corner case kernel bug which flooded the serial console with messages and caused a multi-CPU system to freeze.

In 2008 Time went backwards

My first bug in 2008 is RHBZ #428280. glibc introduced SHA-256/512 hashes for hashing passwords with crypt but that wasn't documented.

UPDATE 2014-02-21 While testing 5.1 to 5.2 updates I found RHBZ #435475 - a severe performance degradation in the package installation process. Upgrades took almost twice as much time to complete, rising from 4 hours to 7 hours depending on hardware and package set. This was a tough one to test and verify. END UPDATE

While dogfooding the 5.2 beta in March I hit RHBZ #437252 - kernel: Timer ISR/0: Time went backwards. To this day this is one of my favorite bugs, with a great error message!

Removal of a hack in RPM led to file conflicts under /usr/share/doc in several packages: RHBZ #448905, RHBZ #448906, RHBZ #448907, RHBZ #448909, RHBZ #448910, RHBZ #448911, which was also the first time I happened to file several bugs in a row.

ia64 couldn't boot with encrypted partitions - RHBZ #464769, RHEL 5 introduced support for ext4 - RHBZ #465248 and I've hit a fontconfig issue during upgrades - RHBZ #469190 which continued to resurface occasionally during the next 5 years.

This is the year when I took over responsibility for the general installation testing of RHEL 5 from James Laska and will continue to do so until it reaches end-of-life!

I've also worked on RHEL 4, Fedora and even the OLPC project. On the testing side of things I've participated in testing Fedora networking on the XO hardware and worked on translation and general issues.

2009 - here comes RHEL 6

This year starts my 3 most productive years period.

The second bug reported this year is RHBZ #481338, which also mentions one of my hobbies: wrist watches. While browsing a particular website Xorg CPU usage rose to 100%. I've seen a number of these through the years and I'm still not sure whether it's Xorg or Firefox or both to blame. And I still see my CPU usage go to 100% just like that and drain my battery. I'm open to suggestions on how to test and debug what's going on, as it doesn't happen in a reproducible fashion.

I happened to work on RHEL 4, RHEL 5, Fedora and the upcoming RHEL 6 releases and managed to file bugs in a row not once but twice. I wish I was paid per bug reported back then :).

The first series was about empty debuginfo packages with both empty packages which shouldn't have existed at all (e.g. redhat-release) and missing debuginfo information for binary packages (e.g. nmap).

The second series is around 100 bugs which had to do with the texinfo documentation of packages when installed with --excludedocs. The first one is RHBZ #515909 and the last one RHBZ #516014. While this works great for bumping up your bug count it made lots of developers unhappy and not all bugs were fixed. Still the use case is valid and these were proper software errors. It is also the first time I've used a script to file the bugs automatically and not by hand.

Near the end of the year I started testing installation on new hardware from the likes of Intel and AMD before it hit the market. I had the pleasure of working with the latest chipsets and CPUs, sometimes even pre-release versions, making sure Red Hat Enterprise Linux installed and worked properly on them. I stopped doing this last year to free up time for other tasks.

2010 - one bug a day keeps developers at bay :)

My most productive year with 1+ bugs per day.

2010 starts with a bug about file conflicts (a private one) and continues with the same narrative throughout the year. As a matter of fact I did a small experiment and found around 50000 (you read that right, fifty thousand) potentially conflicting files, mostly between multi-lib packages, which were being ignored by RPM due to its multi-lib policies. However these were primarily man pages or documentation and most of them didn't get fixed. The proper fix would have been to introduce -docs sub-packages and split these files away from the actual binaries. Fortunately the world migrated to 64-bit only and this isn't an issue anymore.

By that time RHEL 6 development was running at its peak capacity and there were Beta versions available. Almost the entire year I've been working on internal RHEL 6 snapshots and discovering the many new bugs introduced with tons of new features in the installer. Some of the new features included better IPv6 support, dracut and KVM.

An interesting set of bugs from September are the rpmlint errors and warnings, for example RHBZ #634931. I just ran the most basic test tool against some packages. It generated lots of false negatives but also revealed bugs which were fixed.

Although there were many bugs filed this year I don't see any particularly interesting ones. It's been more like lots of work to improve the overall quality than exploring edge cases and finding interesting failures. If you find a bug from this period that you think is interesting I will comment on it.

2011 - Your system may be seriously compromised

This is the last year of my 3 year top cycle.

It starts with RHBZ #666687 - a patch for my crappy printer-scanner-coffee maker which I've been carrying around since 2009 when I bought it.

I was still working primarily on RHEL 6 but helped test the latest RHEL 4 release before it went end-of-life. The interesting thing was that, unlike other releases, RHEL4-U9 was not available on installation media but only as an update from RHEL4-U8. This was a great experience of a kind you get to see only every 4 to 5 years or so.

Btw I also led the installation testing effort and the RTT team through the last few RHEL 4 releases, but given the product was approaching EOL there weren't many changes and things went smoothly.

A minor side activity was me playing around with USB Multi-seat and finding a few bugs here and there along the way.

Another interesting activity in 2011 was proof-reading the entire product documentation before its release which I can now relate to the Testing Documentation talk at FOSDEM 2014.

In 2011 I've started using the cloud and most notably Red Hat's OpenShift PaaS service. First internally as an early adopter and later externally after the product was announced to the public. There are a few interesting bugs here but they are private and I'm not at liberty to share although they've all been fixed since then.

An interesting bug with NUMA, Xen and ia64 (RHBZ #696599 - private) had me and devel banging our heads against the wall until we figured out that on this particular system the NUMA configuration was not suitable for running Xen virtualization.

Can you spot the problem here?

try:
    import kickstartGui
except:
    print (_("Could not open display because no X server is running."))
    print (_("Try running 'system-config-kickstart --help' for a list of options."))
    sys.exit(0)

Be honest and use the comments form to tell me what you've found. If you struggled then see RHBZ #703085 and come back again to comment. I'd love to hear from you.

What do you do when you see an error message saying: Your system may be seriously compromised! /usr/sbin/NetworkManager tried to load a kernel module. This is the scariest error message I've ever seen. Luckily it's just SELinux overreacting, see RHBZ #704090.

2012 is in the red zone

While the number of reported bugs dropped significantly compared to previous years, this was the year when I reported almost exclusively high priority and urgent bugs, the first one being RHBZ #771901.

RHBZ #799384 (against Fedora) is one of the rare cases when I was able to contribute (although just by raising awareness) to localization and improved support for Bulgarian and Cyrillic. The other case was last year. Btw I find it strange that although Cyrillic was invented by Bulgarians we didn't (and still don't) have a native font co-maintainer. Somebody please step up!

The red zone bugs continue to span till the end of the year across RHEL 5, 6 and early cuts of RHEL 7 with a pinch of OpenShift and some internal and external test tools.

In 2013 Bugzilla hit 1 million bugs

The year starts with a very annoying and still not fixed bug against ABRT. It's very frustrating when the tool which is supposed to help you file bugs doesn't work properly, see RHBZ #903591. It's a known fact that ABRT has problems and for this scenario I may have a tip for you.

RHBZ #923416 - another one of those 100% CPU bugs. As I said, they happen from time to time and mostly go unfixed or only partially fixed because of their nature. Btw as I'm writing this post with a few tabs open in Firefox, it keeps using between 15% and 20% CPU and the CPU temperature is over 90 degrees C. And all I'm doing is writing text in the console. Help!

RHBZ #967229 - a minor one, but it reveals an important thing: your output (and input, for that matter) methods may produce different results. Worth testing if your software supports more than one.

This year I did some odd jobs working on several of Red Hat's layered products, mainly Developer Toolset. It wasn't a tough job and was a refreshing break from the mundane installation testing.

While I stopped actively working on the various RHEL families which are under development or still supported, I happened to be one of the top 10 bug reporters for high/urgent priority bugs for RHEL 7. In appreciation Red Hat sent me lots of corporate gifts and the Platform QE hoodie pictured at the top of the page. Many thanks!

In the summer Red Hat's Bugzilla hit One Million bugs. The closest I came to this milestone is RHBZ #999941.

I finally managed to transfer most of my responsibilities to co-workers and joined the Fedora QA team as a part-time contributor. I had some highs and lows with Fedora test days in Sofia as well. The good thing is I scored another 15 bugs across the virtualization stack and GNOME 3.10.

The year wraps up with another series of identical bugs, RHBZ #1024729 and RHBZ #1025289 for example. As it turned out, lots of packages don't have any test suites at all and those which do don't always execute them automatically in %check. I've promised myself to improve this but still haven't had time to work on it. Hopefully by March I will have something in the works.

2014 - Fedora QA improvement

For the last two months I've been working on some internal projects and looking a little bit into improving processes, test coverage and QA infrastructure - RHBZ #1064895. And Rawhide (the upcoming Fedora 21) isn't behaving - RHBZ #1063245.

My goal for this year is to do more work on improving the overall test coverage of Fedora and together with the Fedora QA team bring an open testing infrastructure to the community.

Let's see how well that plays out!

What do I do now

During the last year I have gradually changed my responsibilities to work more on Fedora. As a volunteer in Fedora QA I'm regularly testing installation of Rawhide trees and trying to work closely with the community. I still have to manage RHEL 5 test cycles, where I don't expect anything disruptive at this stage of the product life-cycle!

I'm open to any ideas and help which can improve test coverage and quality of software in Fedora. If you're just joining the open source world this is an excellent opportunity to do some good, get noticed and maybe even get a job. I will definitely help you get through the process if you're willing to commit your time to this.

I hope this long post has been useful and fun to read. Please use the comments form to tell me if I'm missing something or you'd like to know more.

Looking forward to the next 7 years!

There are comments.

Tip: How to Build updates.img for Fedora

Anaconda - the Fedora, CentOS and Red Hat Enterprise Linux installer - has the capability to incorporate updates at runtime. These updates are generally distributed as an updates.img file. Here is how to easily build one from a working installation tree.

Instead of using the git sources to build an updates.img I prefer using the SRPM from the tree which I am installing. This way the resulting updates image will be more consistent with the anaconda version already available in the tree. And in theory everything you need to build it should already be available as well. UPDATE 2014-02-08: You can also build the updates.img from the git source tree which is shown at the bottom of this article.

The following steps work for me on a Fedora 20 system.

  • Download the source RPM for anaconda from the tree and extract the sources to a working directory (see the sketch after this list for one way to do that). Then:

    cd anaconda-20.25.16-1
    git init
    git add .
    git commit -m "initial import"
    git tag anaconda-20.25.16-1
    
  • The above steps will create a local git repository and tag the initial contents before modification. The tag is required later by the script which creates the updates image;

  • After making your changes, commit them and from the top anaconda directory execute:

    ./scripts/makeupdates -t anaconda-20.25.16-1
    

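For completeness, here is one way to fetch and extract the SRPM mentioned in the first step. This is only a sketch: it assumes the yum-utils package is installed and a source repository is configured (you can also grab the .src.rpm straight from the installation tree), and the exact name-version-release and tarball name are examples which will differ per tree:

    # download the source RPM from the configured repos
    yumdownloader --source anaconda
    # unpack the SRPM, then the source tarball inside it
    rpm2cpio anaconda-20.25.16-1.fc20.src.rpm | cpio -idmv
    tar xjf anaconda-20.25.16.tar.bz2
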
You can also add RPM contents to the updates.img but you need to download the packages first:

yumdownloader python-coverage python-setuptools

./scripts/makeupdates -t anaconda-20.25.16-1 -a ~/python-coverage-3.7-1.fc20.x86_64.rpm -a ~/python-setuptools-1.4.2-1.fc20.noarch.rpm 
BUILDDIR /home/atodorov/anaconda-20.25.16-1
Including anaconda
2 RPMs added manually:
python-setuptools-1.4.2-1.fc20.noarch.rpm
python-coverage-3.7-1.fc20.x86_64.rpm
cd /home/atodorov/anaconda-20.25.16-1/updates && rpm2cpio /home/atodorov/python-setuptools-1.4.2-1.fc20.noarch.rpm | cpio -dium
3534 blocks
cd /home/atodorov/anaconda-20.25.16-1/updates && rpm2cpio /home/atodorov/python-coverage-3.7-1.fc20.x86_64.rpm | cpio -dium
1214 blocks
<stdin> to <stdout> 4831 blocks

updates.img ready

In the above example I have only modified the top-level anaconda file (/usr/sbin/anaconda inside the installation environment), experimenting with python-coverage integration.
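
If you want to double-check what went into the image: the updates image is typically a gzip-compressed cpio archive (the cpio block counts in the output above hint at this), so you should be able to list its contents with something like:

    # list the files packed into the updates image
    zcat updates.img | cpio -tv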

You are done! Make the updates.img available to Anaconda and start using it!
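
One common way to make it available is to serve the file over HTTP and point the installer at it from the boot prompt. The URL below is an example, and the exact option name can vary between Anaconda versions (older releases used updates= without the inst. prefix):

    inst.updates=http://example.com/updates.img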

UPDATE 2014-02-08: If you prefer working with the anaconda source tree here's how to do it:

# clone the upstream anaconda repository
git clone git://git.fedorahosted.org/git/anaconda.git
cd anaconda/
# branch off the tag matching the version in your installation tree
git checkout anaconda-20.25.16-1 -b my_feature-branch

... make changes ...

git commit -a -m "Fixed something"

# build updates.img from the changes committed since the tag
./scripts/makeupdates -t anaconda-20.25.16-1

There are comments.

FOSDEM 2014 Report - Day #2 Testing and Automation

Testing and Automation

FOSDEM was hosting the Testing and automation devroom for the second year and this was the very reason I attended the conference. I managed to get in early and stayed until 15:00, when I had to leave to catch my flight (which was late :().

There were 3 talks given by Red Hat employees in the testing devroom, which was a nice opportunity to meet some of the folks I've been working with on IRC. Unfortunately I didn't meet anyone from Fedora QA. Not sure if they were attending or not.

All the talks were interesting, so see the official schedule and videos for more details. I will highlight only the items I found particularly interesting or had not heard of before.

ANSTE

ANSTE - Advanced Network Service Testing Environment - is a test infrastructure controller, something like our own Beaker, but designed to create complex networking environments. I think it lacks many of the provisioning features built into Beaker, as well as integration with various hypervisors and bare-metal provisioning. What it seems to do better (as far as I can tell from the talk) is deploy virtual systems and create more complex network configurations between them. Not something I will need in the near future but definitely worth a look.

cwrap

cwrap is...

a set of tools to create a fully isolated network environment to test client/server components on a single host. It provides synthetic account information, hostname resolution and support for privilege separation. The heart of cwrap consists of three libraries you can preload to any executable.

That one was the coolest technology I've seen so far, although I may not need to use it at all - hmmm, maybe testing DHCP fits the case.

It evolved from the Samba project and takes advantage of the order in which libraries are searched when resolving functions. When you preload the project's libraries into any executable, they override the standard libc functions for working with sockets, user accounts and privilege escalation.

The socket_wrapper library redirects networking sockets through local UNIX sockets and gives you the ability to test applications which need privileged ports with a local developer account.

The nss_wrapper library provides artificial information for user and group accounts, network name resolution using a hosts file and loading and testing of NSS modules.

The uid_wrapper library allows uid switching as a normal user (e.g. fake root) and supports user/group changing in the local thread using the syscalls (like glibc).

All of these wrapper libraries are controlled via environment variables, which definitely makes testing of daemons and networking applications easier.
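
To illustrate, here is roughly how a daemon could be run under socket_wrapper and nss_wrapper so it binds a privileged port without root. This is a sketch only: my_server and the file paths are made up, and the library file names may differ on your distribution:

    # preload the wrappers so libc socket and NSS calls are intercepted
    LD_PRELOAD=libsocket_wrapper.so:libnss_wrapper.so \
    SOCKET_WRAPPER_DIR=/tmp/test-sockets \
    SOCKET_WRAPPER_DEFAULT_INTERFACE=10 \
    NSS_WRAPPER_HOSTS=./test-hosts.txt \
    ./my_server --port 53   # "privileged" port, no root needed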

Testing Documentation

That one was just scratching the surface of an entire branch of testing which I hadn't even considered before. The talk also explained why it is hard to test documentation and what the possible solutions are.

If you write user guides and technical articles which need to stay current with the software this is definitely the place to start.

Automation in the Foreman Infrastructure

The last talk I listened to, and definitely the best one from a general testing-approach point of view. Greg talked about starting with Foreman's unit tests, then testing the merged PRs, then integration tests, then moving on to test the package build and then the resulting packages themselves.

These guys even try to test their own infrastructure (infrastructure as code) and the test suites they use to test everything else. It's all about automation and the level of confidence you have in the entire process.

I like the fact that no single testing approach can make you confident enough before shipping the code, and that they've taken into account changes which get introduced at various places (e.g. 3rd party package upgrades, distro-specific issues, infrastructure changes and such).

If I had to attend only one session it would have been this one. There are many things for me to take back home and apply to my work on Fedora and RHEL.

If you find any of these topics remotely interesting I advise you to wait until the FOSDEM video team uploads the recordings and watch the entire session stream. I'm definitely missing a lot of stuff which can't easily be reproduced in text form.

You can also find my report of the first FOSDEM'14 day on Saturday here.

There are comments.

