In my previous post I briefly talked
about running anaconda from a git checkout. My goal was to rewrite tests/gui/
so
that they don't use a LiveCD and virtual machines anymore. I'm pleased to announce
that this is already done (still not merged), see
PR#457.
The majority of the changes are just shuffling bits around and deleting unused code. The existing UI tests were mostly working and only needed minor changes. There are two things which didn't work and are temporarily disabled:
To play around with this make sure you have accessibility enabled and:
# cd anaconda/
# export top_srcdir=`pwd`
# setenforce 0
# cd tests/gui/
# ./run_gui_tests.sh
Note: you also need Dogtail for Python3 which isn't officially available yet. I'm building from https://vhumpa.fedorapeople.org/dogtail/beta/dogtail3-0.9.1-0.3.beta3.src.rpm
My future plans are to figure out how to re-enable what is temporarily
disabled, update run_gui_tests.sh
to properly start gnome-session and
enable accessibility, do a better job cleaning up after a failure,
enable coverage and hook everything into make ci.
Happy testing!
It is now possible to execute anaconda directly from a git checkout.
Disclaimer: this is only for testing purposes, you are not supposed to
execute anaconda from git and install a running system! My intention is
to use this feature and rewrite the Dogtail tests inside tests/gui/
which
rely on having a LiveCD.iso and running VMs to execute. In the past this has proven
very slow and made problems difficult to debug, hence the change.
Note: you will need to have an active DISPLAY in your environment and also set SELinux to permissive, see rhbz#1276376.
Please see PR 438 for more details.
Yesterday I added Krasimir Tsonev's blog to http://planet.sofiavalley.com and the planet broke. Suddenly it started showing only Krasi's articles, all of them with the same date. The problem was that the RSS feed didn't have any timestamps. The fix is trivial:
--- rss.xml.orig 2015-11-13 10:12:35.348625718 +0200
+++ rss.xml 2015-11-13 10:12:45.157932304 +0200
@@ -9,120 +9,160 @@
<title><![CDATA[A modern React starter pack based on webpack]]></title>
<link>http://krasimirtsonev.com/blog/article/a-modern-react-starter-pack-based-on-webpack</link>
<description><![CDATA[<p><i>Checkout React webpack starter in <a href=\"https://github.com/krasimir/react-web<br /><p>You know how crazy is the JavaScript world nowadays. There are new frameworks, libraries and tools coming every day. Frequently I’m exploring some of these goodies. I got a week long holiday. I promised to myself that I’ll not code, read or watch about code. Well, it’s stronger than me. <a href=\"https://github.com/krasimir/react-webpack-starter\">React werbpack starter</a> is the result of my no-programming week.</p>]]></description>
+ <pubDate>Thu, 01 Oct 2015 00:00:00 +0300</pubDate>
+ <guid>http://krasimirtsonev.com/blog/article/a-modern-react-starter-pack-based-on-webpack</guid>
</item>
Thanks to Krasi for fixing this quickly and happy reading!
Anaconda, the Fedora and Red Hat Enterprise Linux installer, has gained some features to facilitate building Docker images. These are only available in kickstart. To build a Docker image for HTTPD, using packages provided in the distro use the following ks.cfg file:
install
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --lock
firewall --disabled
timezone Europe/Sofia
clearpart --all --initlabel
part / --fstype=ext4 --size=1 --grow
bootloader --disabled
%packages --nocore --instLangs=en_US --excludedocs
httpd
-kernel
yum-langpacks # workaround for rhbz#1271766
%end
The above kickstart file will skip installing a boot loader because of --disabled (the resulting image will not be bootable), skip the @core package group because of --nocore, install only English language files because of --instLangs and exclude the documentation because of --excludedocs.
Note: the previous --nobase option is deprecated and doesn't have any effect.
After the VM installation is complete grab the contents of the root directory:
# virt-tar-out -a /var/lib/libvirt/images/disk.qcow2 / myimage.tar
Import the tarball into Docker and inspect the result:
# docker import myimage.tar local_images:ver1.0
8a2324e6d0e940a998b990262335894a17d261450c33f57dc153d3d1987e4fc1
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
local_images ver1.0 8a2324e6d0e9 13 seconds ago 320.6 MB
registry.access.redhat.com/rhel latest 82ad5fa11820 6 weeks ago 158.3 MB
registry.access.redhat.com/rhscl_beta/httpd-24-rhel7 latest 55a8a150cf2d 9 weeks ago 201.1 MB
Run commands into a new container:
# docker run --name=bash_myimage -it 8a2324e6d0e9 /bin/bash
bash-4.2# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 Beta (Maipo)
bash-4.2# rpm -q httpd
httpd-2.4.6-40.el7.x86_64
bash-4.2# exit
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f7ca6d5844 8a2324e6d0e9 "/bin/bash" 24 seconds ago Exited (0) 19 seconds ago bash_myimage
As you can see the resulting image is bigger than the stock images provided by Red Hat. At this moment I don't know if this is the minimum package set which satisfies dependencies or whether anaconda adds a bit more on its own. The full package list is given below. There are some packages like device-mapper, dracut, e2fsprogs, iptables, kexec-tools, SELinux related, systemd and tzdata which look out of place. My guess is some of them are pulled in by the various kickstart commands and are not really necessary. I will follow up with devel and see if the content can be stripped down even more.
For more information check out these docs:
Full package list:
acl-2.2.51-12.el7.x86_64
apr-1.4.8-3.el7.x86_64
apr-util-1.5.2-6.el7.x86_64
audit-libs-2.4.1-5.el7.x86_64
basesystem-10.0-7.el7.noarch
bash-4.2.46-19.el7.x86_64
bind-libs-lite-9.9.4-29.el7.x86_64
bind-license-9.9.4-29.el7.noarch
binutils-2.23.52.0.1-54.el7.x86_64
bzip2-libs-1.0.6-13.el7.x86_64
ca-certificates-2015.2.4-71.el7.noarch
chkconfig-1.3.61-5.el7.x86_64
chrony-2.1.1-1.el7.x86_64
coreutils-8.22-15.el7.x86_64
cpio-2.11-24.el7.x86_64
cracklib-2.9.0-11.el7.x86_64
cracklib-dicts-2.9.0-11.el7.x86_64
cryptsetup-libs-1.6.7-1.el7.x86_64
curl-7.29.0-25.el7.x86_64
cyrus-sasl-lib-2.1.26-19.2.el7.x86_64
dbus-1.6.12-13.el7.x86_64
dbus-glib-0.100-7.el7.x86_64
dbus-libs-1.6.12-13.el7.x86_64
dbus-python-1.1.1-9.el7.x86_64
device-mapper-1.02.107-5.el7.x86_64
device-mapper-libs-1.02.107-5.el7.x86_64
dhclient-4.2.5-42.el7.x86_64
dhcp-common-4.2.5-42.el7.x86_64
dhcp-libs-4.2.5-42.el7.x86_64
diffutils-3.3-4.el7.x86_64
dracut-033-358.el7.x86_64
dracut-network-033-358.el7.x86_64
e2fsprogs-1.42.9-7.el7.x86_64
e2fsprogs-libs-1.42.9-7.el7.x86_64
ebtables-2.0.10-13.el7.x86_64
elfutils-libelf-0.163-3.el7.x86_64
elfutils-libs-0.163-3.el7.x86_64
ethtool-3.15-2.el7.x86_64
expat-2.1.0-8.el7.x86_64
file-libs-5.11-31.el7.x86_64
filesystem-3.2-20.el7.x86_64
findutils-4.5.11-5.el7.x86_64
firewalld-0.3.9-14.el7.noarch
gawk-4.0.2-4.el7.x86_64
gdbm-1.10-8.el7.x86_64
glib2-2.42.2-5.el7.x86_64
glibc-2.17-105.el7.x86_64
glibc-common-2.17-105.el7.x86_64
gmp-6.0.0-11.el7.x86_64
gnupg2-2.0.22-3.el7.x86_64
gobject-introspection-1.42.0-1.el7.x86_64
gpgme-1.3.2-5.el7.x86_64
grep-2.20-2.el7.x86_64
gzip-1.5-8.el7.x86_64
hardlink-1.0-19.el7.x86_64
hostname-3.13-3.el7.x86_64
httpd-2.4.6-40.el7.x86_64
httpd-tools-2.4.6-40.el7.x86_64
info-5.1-4.el7.x86_64
initscripts-9.49.30-1.el7.x86_64
iproute-3.10.0-54.el7.x86_64
iptables-1.4.21-16.el7.x86_64
iputils-20121221-7.el7.x86_64
kexec-tools-2.0.7-37.el7.x86_64
keyutils-libs-1.5.8-3.el7.x86_64
kmod-20-5.el7.x86_64
kmod-libs-20-5.el7.x86_64
kpartx-0.4.9-85.el7.x86_64
krb5-libs-1.13.2-10.el7.x86_64
langtable-0.0.31-3.el7.noarch
langtable-data-0.0.31-3.el7.noarch
langtable-python-0.0.31-3.el7.noarch
libacl-2.2.51-12.el7.x86_64
libassuan-2.1.0-3.el7.x86_64
libattr-2.4.46-12.el7.x86_64
libblkid-2.23.2-26.el7.x86_64
libcap-2.22-8.el7.x86_64
libcap-ng-0.7.5-4.el7.x86_64
libcom_err-1.42.9-7.el7.x86_64
libcurl-7.29.0-25.el7.x86_64
libdb-5.3.21-19.el7.x86_64
libdb-utils-5.3.21-19.el7.x86_64
libedit-3.0-12.20121213cvs.el7.x86_64
libffi-3.0.13-16.el7.x86_64
libgcc-4.8.5-4.el7.x86_64
libgcrypt-1.5.3-12.el7_1.1.x86_64
libgpg-error-1.12-3.el7.x86_64
libidn-1.28-4.el7.x86_64
libmnl-1.0.3-7.el7.x86_64
libmount-2.23.2-26.el7.x86_64
libnetfilter_conntrack-1.0.4-2.el7.x86_64
libnfnetlink-1.0.1-4.el7.x86_64
libpwquality-1.2.3-4.el7.x86_64
libselinux-2.2.2-6.el7.x86_64
libselinux-python-2.2.2-6.el7.x86_64
libsemanage-2.1.10-18.el7.x86_64
libsepol-2.1.9-3.el7.x86_64
libss-1.42.9-7.el7.x86_64
libssh2-1.4.3-10.el7.x86_64
libstdc++-4.8.5-4.el7.x86_64
libtasn1-3.8-2.el7.x86_64
libuser-0.60-7.el7_1.x86_64
libutempter-1.1.6-4.el7.x86_64
libuuid-2.23.2-26.el7.x86_64
libverto-0.2.5-4.el7.x86_64
libxml2-2.9.1-5.el7_1.2.x86_64
lua-5.1.4-14.el7.x86_64
lzo-2.06-8.el7.x86_64
mailcap-2.1.41-2.el7.noarch
ncurses-5.9-13.20130511.el7.x86_64
ncurses-base-5.9-13.20130511.el7.noarch
ncurses-libs-5.9-13.20130511.el7.x86_64
nspr-4.10.8-1.el7_1.x86_64
nss-3.19.1-17.el7.x86_64
nss-softokn-3.16.2.3-13.el7_1.x86_64
nss-softokn-freebl-3.16.2.3-13.el7_1.x86_64
nss-sysinit-3.19.1-17.el7.x86_64
nss-tools-3.19.1-17.el7.x86_64
nss-util-3.19.1-3.el7_1.x86_64
openldap-2.4.40-8.el7.x86_64
openssl-libs-1.0.1e-42.el7_1.9.x86_64
p11-kit-0.20.7-3.el7.x86_64
p11-kit-trust-0.20.7-3.el7.x86_64
pam-1.1.8-12.el7_1.1.x86_64
pcre-8.32-15.el7.x86_64
pinentry-0.8.1-14.el7.x86_64
pkgconfig-0.27.1-4.el7.x86_64
popt-1.13-16.el7.x86_64
procps-ng-3.3.10-3.el7.x86_64
pth-2.0.7-23.el7.x86_64
pygobject3-base-3.14.0-3.el7.x86_64
pygpgme-0.3-9.el7.x86_64
pyliblzma-0.5.3-11.el7.x86_64
python-2.7.5-34.el7.x86_64
python-decorator-3.4.0-3.el7.noarch
python-iniparse-0.4-9.el7.noarch
python-libs-2.7.5-34.el7.x86_64
python-pycurl-7.19.0-17.el7.x86_64
python-slip-0.4.0-2.el7.noarch
python-slip-dbus-0.4.0-2.el7.noarch
python-urlgrabber-3.10-7.el7.noarch
pyxattr-0.5.1-5.el7.x86_64
qrencode-libs-3.4.1-3.el7.x86_64
readline-6.2-9.el7.x86_64
redhat-logos-70.0.3-4.el7.noarch
redhat-release-server-7.2-7.el7.x86_64
rpm-4.11.3-17.el7.x86_64
rpm-build-libs-4.11.3-17.el7.x86_64
rpm-libs-4.11.3-17.el7.x86_64
rpm-python-4.11.3-17.el7.x86_64
sed-4.2.2-5.el7.x86_64
setup-2.8.71-6.el7.noarch
shadow-utils-4.1.5.1-18.el7.x86_64
shared-mime-info-1.1-9.el7.x86_64
snappy-1.1.0-3.el7.x86_64
sqlite-3.7.17-8.el7.x86_64
systemd-219-19.el7.x86_64
systemd-libs-219-19.el7.x86_64
sysvinit-tools-2.88-14.dsf.el7.x86_64
tzdata-2015g-1.el7.noarch
ustr-1.0.4-16.el7.x86_64
util-linux-2.23.2-26.el7.x86_64
xz-5.1.2-12alpha.el7.x86_64
xz-libs-5.1.2-12alpha.el7.x86_64
yum-3.4.3-132.el7.noarch
yum-langpacks-0.4.2-4.el7.noarch
yum-metadata-parser-1.1.4-10.el7.x86_64
zlib-1.2.7-15.el7.x86_64
In my previous post I've talked about testing anaconda and friends and raised some questions. Today I'm going to give an example of how to answer one of them: "How different is the code execution path between different tests?"
I'm going to use coverage-tools in my explanations below so a little introduction is required. All the tools are executable Python scripts which build on top of existing coverage.py API. The difference is mainly in flexibility of parameters and output formatting. I've tried to keep as close as possible to the existing behavior of coverage.py.
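As a point of reference, the kind of coverage.py API such tools can build on looks roughly like this (a sketch assuming coverage.py 4.x, not the actual coverage-tools code):
import coverage

cov = coverage.Coverage(data_file='.coverage')
cov.load()  # read the previously collected data file
for filename in cov.get_data().measured_files():
    # analysis2() returns (filename, statements, excluded, missing, missing_formatted)
    _, statements, _, missing, _ = cov.analysis2(filename)
    executed = sorted(set(statements) - set(missing))
    print(filename, len(executed), 'of', len(statements), 'statements executed')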
coverage-annotate - when given a .coverage data file prints the source code annotated with line numbers and execution markers.
!!! missing /usr/lib64/python2.7/site-packages/pyanaconda/anaconda_argparse.py
>>> covered /usr/lib64/python2.7/site-packages/pyanaconda/anaconda_argparse.py
... skip ...
37 > import logging
38 > log = logging.getLogger("anaconda")
39
40 # Help text formatting constants
41
42 > LEFT_PADDING = 8 # the help text will start after 8 spaces
43 > RIGHT_PADDING = 8 # there will be 8 spaces left on the right
44 > DEFAULT_HELP_WIDTH = 80
45
46 > def get_help_width():
47 > """
48 > Try to detect the terminal window width size and use it to
49 > compute optimal help text width. If it can't be detected
50 > a default values is returned.
51
52 > :returns: optimal help text width in number of characters
53 > :rtype: int
54 > """
55 # don't do terminal size detection on s390, it is not supported
56 # by its arcane TTY system and only results in cryptic error messages
57 # ending on the standard output
58 # (we do the s390 detection here directly to avoid
59 # the delay caused by importing the Blivet module
60 # just for this single call)
61 > is_s390 = os.uname()[4].startswith('s390')
62 > if is_s390:
63 ! return DEFAULT_HELP_WIDTH
64
... skip ...
In the example above all lines starting with > were executed by the interpreter. All top-level import statements were executed as you would expect. Then the method get_help_width() was executed (called from somewhere). Because this was an x86_64 machine, line 63 was not executed. It is marked with !. The comments and empty lines are of no interest.
coverage-diff - produces git like diff reports on the text output of annotate.
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
@@ -634,7 +634,7 @@
634 # Wait to make sure the other threads are done before sending ready, otherwise
635 # the spoke may not get be sensitive by _handleCompleteness in the hub.
636 > while not self.ready:
- 637 ! time.sleep(1)
+ 637 > time.sleep(1)
638 > hubQ.send_ready(self.__class__.__name__, False)
639
640 > def refresh(self):
In this example line 637 was not executed in the first test run, while it was executed in the second test run. Reading the comments above it is clear the difference between the two test runs is just timing and synchronization.
How different is the code execution path between different tests? Looking at Fedora 23 test results we see several tests which differ only slightly in their setup - installation via HTTP, FTP or NFS; installation to SATA, SCSI, SAS drives; installation using RAID for the root file system. These are good candidates for further analysis.
Note: my results below are not from Fedora 23 but the conclusions still apply! The tests were executed on bare metal and virtual machines, trying to use the same hardware or the same system configurations where possible!
Example: HTTP vs. FTP
--- a/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py
@@ -891,7 +891,7 @@
891
892 # Run any listeners for the new state
893 > for func in self._event_listeners[event_id]:
- 894 ! func()
+ 894 > func()
895
896 > def _runThread(self, storage, ksdata, payload, fallback, checkmount):
897 # This is the thread entry
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/lib/resize.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/lib/resize.py
@@ -102,10 +102,10 @@
102 # Otherwise, fall back on increasingly vague information.
103 > if not part.isleaf:
104 > return self.storage.devicetree.getChildren(part)[0].name
- 105 > if getattr(part.format, "label", None):
+ 105 ! if getattr(part.format, "label", None):
106 ! return part.format.label
- 107 > elif getattr(part.format, "name", None):
- 108 > return part.format.name
+ 107 ! elif getattr(part.format, "name", None):
+ 108 ! return part.format.name
109 ! else:
110 ! return ""
111
@@ -315,10 +315,10 @@
315 > def on_key_pressed(self, window, event, *args):
316 # Handle any keyboard events. Right now this is just delete for
317 # removing a partition, but it could include more later.
- 318 > if not event or event and event.type != Gdk.EventType.KEY_RELEASE:
+ 318 ! if not event or event and event.type != Gdk.EventType.KEY_RELEASE:
319 ! return
320
- 321 > if event.keyval == Gdk.KEY_Delete and self._deleteButton.get_sensitive():
+ 321 ! if event.keyval == Gdk.KEY_Delete and self._deleteButton.get_sensitive():
322 ! self._deleteButton.emit("clicked")
323
324 > def _sumReclaimableSpace(self, model, path, itr, *args):
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
@@ -634,7 +634,7 @@
634 # Wait to make sure the other threads are done before sending ready, otherwise
635 # the spoke may not get be sensitive by _handleCompleteness in the hub.
636 > while not self.ready:
- 637 ! time.sleep(1)
+ 637 > time.sleep(1)
638 > hubQ.send_ready(self.__class__.__name__, False)
639
640 > def refresh(self):
The difference in source.py is from timing/synchronization and can safely be ignored.
I'm not exactly sure about __init__.py but it doesn't look like a big deal.
We're left with resize.py. The differences in on_key_pressed() are because
I've probably used the keyboard instead of the mouse (these are indeed manual installs).
The other difference is in how the partition labels are displayed. One of the installs
was probably using fresh disks while the other was not.
Example: SATA vs. SCSI - no difference
Example: SATA vs. SAS (mpt2sas driver)
--- a/usr/lib64/python2.7/site-packages/pyanaconda/bootloader.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/bootloader.py
@@ -109,10 +109,10 @@
109 > try:
110 > opts.parity = arg[idx+0]
111 > opts.word = arg[idx+1]
- 112 ! opts.flow = arg[idx+2]
- 113 ! except IndexError:
- 114 > pass
- 115 > return opts
+ 112 > opts.flow = arg[idx+2]
+ 113 > except IndexError:
+ 114 ! pass
+ 115 ! return opts
116
117 ! def _is_on_iscsi(device):
118 ! """Tells whether a given device is on an iSCSI disk or not."""
@@ -1075,13 +1075,13 @@
1075 > command = ["serial"]
1076 > s = parse_serial_opt(self.console_options)
1077 > if unit and unit != '0':
- 1078 ! command.append("--unit=%s" % unit)
+ 1078 > command.append("--unit=%s" % unit)
1079 > if s.speed and s.speed != '9600':
1080 > command.append("--speed=%s" % s.speed)
1081 > if s.parity:
- 1082 ! if s.parity == 'o':
+ 1082 > if s.parity == 'o':
1083 ! command.append("--parity=odd")
- 1084 ! elif s.parity == 'e':
+ 1084 > elif s.parity == 'e':
1085 ! command.append("--parity=even")
1086 > if s.word and s.word != '8':
1087 ! command.append("--word=%s" % s.word)
As you can see the difference is minimal, mostly related to the underlying hardware. As far as I can tell this has to do with how the bootloader is installed on disk but I'm no expert on this particular piece of code. I've seen the same difference in other comparisons so it probably has to do more with hardware than with what kind of disk/driver is used.
Example: RAID 0 vs. RAID 1 - manual install
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/datetime_spoke.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/datetime_spoke.py
@@ -490,9 +490,9 @@
490
491 > time_init_thread = threadMgr.get(constants.THREAD_TIME_INIT)
492 > if time_init_thread is not None:
- 493 > hubQ.send_message(self.__class__.__name__,
- 494 > _("Restoring hardware time..."))
- 495 > threadMgr.wait(constants.THREAD_TIME_INIT)
+ 493 ! hubQ.send_message(self.__class__.__name__,
+ 494 ! _("Restoring hardware time..."))
+ 495 ! threadMgr.wait(constants.THREAD_TIME_INIT)
496
497 > hubQ.send_ready(self.__class__.__name__, False)
498
As far as I can tell the difference is related to hardware clock settings, probably due to different defaults in the BIOS on the various hardware. Additional tests with RAID 5 and RAID 6 reveal the exact same difference. RAID 0 vs. RAID 10 shows no difference at all. Indeed, as far as I know anaconda delegates the creation of RAID arrays to mdadm once the desired configuration is known, so these results are to be expected.
As you can see, sometimes there are tests which appear to be very important
but in reality cover a corner case of the base test. For example if any
of the RAID levels works we can be pretty confident
that all of them work and won't break in anaconda
(thanks Adam Williamson)!
What you do with this information is up to you. Sometimes QA is able to execute all the tests and life is good. Sometimes we have to compromise, skip some testing and accept the risks of doing so. Sometimes you can execute all tests for every build, sometimes only once per milestone. Whatever the case having the information to back up your decision is vital!
In my next post on this topic I'm going to talk more about functional tests vs. unit tests. Both anaconda and blivet have both kinds of tests and I'm interested to know, when tests from the two categories focus on the same functionality, how they differ. If we have a unit test for feature X, does it warrant spending the resources to do functional testing for X as well?
My previous post was an introduction to testing installation related components. Now I'm going to talk more about anaconda and how it is tested.
There are two primary ways to test anaconda. You can execute make check
in the
source directory which will trigger the package test suite. The other possibility
is to perform an actual installation, on bare metal or a virtual machine, using the
latest Rawhide snapshots which also
include the latest anaconda. For both of these methods we can collect code
coverage information. In live installation mode coverage is enabled via the
inst.debug
boot argument. Fedora 23 and earlier use debug=1
but that
can lead to problems
sometimes.
Kickstart
is a method of automating the installation of Fedora by supplying the necessary
configuration into a text file and pointing the installer at this file. There is
the directory tests/kickstart_tests
, inside the anaconda source, where each
test is a kickstart file and a shell script. The test runner provisions a virtual
machine using boot.iso and the kickstart file. A shell script then verifies
installation was as expected and copies files of interest to the host system.
Kickstart files are also the basis for testing Fedora installations in
Beaker.
Naturally some of these in-package kickstart tests are the same as out-of-band kickstart tests. Hint: there are more available but not yet public.
The question which I don't have an answer for right now is "Can we remove some of the duplicates, and how does this affect the devel and QE teams?" The pros of in-package testing are that it is faster compared to Beaker. The cons are that you're not testing the real distro (every snapshot is a possible final release to the users).
Dogtail uses accessibility technologies to communicate with desktop applications. It is written in Python and can be used as a GUI test automation framework. A long time ago I proposed support for Dogtail in anaconda, which was rejected; then a couple of years later it was accepted, and later removed from the code again.
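For illustration, driving an application with Dogtail looks roughly like this (a sketch; the widget names below are hypothetical and not taken from the real anaconda test suite):
from dogtail import tree
from dogtail.utils import doDelay

# connect to a running application over AT-SPI
app = tree.root.application('anaconda')
# locate widgets by their accessible name/role and interact with them
app.child('INSTALLATION DESTINATION').click()
doDelay(2)
app.child('Done', roleName='push button').click()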
Anaconda has in-package Dogtail tests (tests/gui/
). They work by attaching
a second disk image with the test suite to a VM running a LiveCD. Anaconda is
started on the LiveCD and an attempt to install Fedora on disk 1 is made.
Everything is driven by the Dogtail scripts. There are only a few of these
tests available and they are currently disabled.
Red Hat QE has also created another method for running Dogtail tests in anaconda
using an updates.img with the previous functionality.
Even if there are some duplicate tests I'm not convinced we have to drop the
tests/gui/
directory from the code because
the framework used to drive the graphical interface of anaconda appears to be very
well written. The code is clean and easy to follow.
Also I don't have metrics of how much these two methods differ or how much they cover
in their testing. IMO they are pretty close, and until we find a way to
reliably execute them on a regular basis there isn't much to be done here.
One idea is to use the --dirinstall
or --image
options and skip the
LiveCD part entirely.
make ci
covers 10% of the entire code base for anaconda. Mind you that
tests/storage
and tests/gui
are currently disabled.
See PR #346,
PR #327 and
PR #319!
There is definitely room for improvement.
On the other hand live installation testing is much better. Text mode covers around 25% while graphical installations cover around 40%. Text and graphical combined cover 50% though. These numbers will drop quite a bit once anaconda learns to include all possible files in its report but it is a good estimate.
The important questions to ask here are:
In my next post I will talk more about these questions and some rudimentary analysis against coverage data from the various test methods and test cases!
Since early 2015 I've been working on testing installation related components in Rawhide. I'm interested in the code produced by the Red Hat Installer Engineering Team and in particular in anaconda, blivet, pyparted and pykickstart. The goal of this effort is to improve the overall testing of these components and also have Red Hat QE contribute some of our knowledge back to the community. The benefit of course will be better software for everyone. In the next several posts I'll summarize what has been done so far and what's to be expected in the future.
Do you want others to contribute tests? I certainly do! When I started looking at the code it was immediately clear there was no documentation related to testing. Everyone needs to know how to write and execute these tests! Currently we have basic README files describing how to install the necessary dependencies for development and test execution, how to execute the tests (and what can be tested) and, most importantly, what the test architecture is. There is a description of how the file structure is organized and which are the base classes to inherit from when adding new tests. Most of the time each component goes through a pylint check and a standard PyUnit test suite.
Test documentation is usually in a tests/README
file. For example:
I've tried to explain as much as possible without bloating the files and going into unnecessary details. If you spot something missing please send a pull request.
This has been largely an effort driven by Chris Lumens from the devel team.
All the components I'm interested in are tested regularly in a CI environment.
There is a make ci
Makefile target for those of you interested in what exactly
gets executed.
In order to improve something you need to know where you stand. Well, I didn't. That's why the first step was to integrate the coverage.py tool with all of these components.
With the exception of blivet (written in C) all of the other components integrate well with coverage.py and produce good statistics. pykickstart is the champ here with 90% coverage, while anaconda is somewhere between 10% and 50%. Full test coverage measurement for anaconda isn't straightforward and will be the subject of my next post. For the C based code we have to hook up with Gcov which shouldn't be too difficult.
At the moment there are several open pull requests to integrate the coverage test
targets with make ci
and also report the results in human readable form. I will be
collecting these for historical references.
I've created some basic text-mode coverage-tools to help me combine and compare data from different executions. These are only the start of it and I'm expanding them as my needs for reporting and analytics evolve. I'm also looking into more detailed coverage reports but I don't have enough data and use cases to work on this front at the moment.
Some ideas currently in mind:
coverage.py is a very nice tool indeed but I guess most people use it in a very limited way. Shortly after I started working with it I found several places which need improvement. These have to do with combining and reporting on multiple files.
Some of the interesting issues I've found and still open are:
In my next post I will talk about anaconda code coverage and what I want to do with it. In the meantime please use the comments to share your feedback.
I've previously written about my Thunderbolt to Ethernet adapter working on Linux despite claims that it should not. Recently I've used my MacBook Air to do a presentation and the Thunderbolt to VGA adapter worked well enough.
It was an Acer adapter but I have no more details b/c it wasn't mine.
Before the event I tested it and it worked, so on the day of the event I freshly rebooted my laptop to make sure no crashed processes or anything like that were running, and gave it a go.
First time I plugged in the MacBook everything worked like a charm. Then my computer was unplugged and the lid closed, causing it to suspend. The second time I plugged it in I was told there was nothing showing on the projector, so I quickly unplugged the adapter and plugged it back in. It worked more or less.
At the time I had LibreOffice Impress in presentation mode but I did see ABRT detecting a kernel problem. When my slides popped up the text on the first one was mostly missing but the rest were ok!
Mind you I'm still running RHEL 7 on my MacBook Air. The above is with kernel-3.10.0-229.14.1.el7.x86_64.
In software testing, usually unit testing, test stubs are programs that simulate the behavior of external dependencies that a module under test depends on. Test stubs provide canned answers to calls made during the test.
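A minimal, generic illustration of the concept (not DNF code; WeatherService and StubClient are made up for this example) looks like this:
import unittest

class WeatherService:
    def __init__(self, client):
        self.client = client  # the external dependency

    def is_freezing(self, city):
        return self.client.get_temperature(city) <= 0

class StubClient:
    def get_temperature(self, city):
        return -5  # canned answer, no network access involved

class WeatherServiceTest(unittest.TestCase):
    def test_is_freezing(self):
        service = WeatherService(StubClient())
        self.assertTrue(service.is_freezing('Sofia'))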
I've discovered an improperly written stub method in one of DNF's tests:
class DownloadCommandTest(unittest.TestCase):
    def setUp(self):
        def stub_fn(pkg_spec):
            if '.src.rpm' in pkg_spec:
                return Query.filter(sourcerpm=pkg_spec)
            else:
                q = Query.latest()
                return [pkg for pkg in q if pkg_spec == pkg.name]

        cli = mock.MagicMock()
        self.cmd = download.DownloadCommand(cli)
        self.cmd.cli.base.repos = dnf.repodict.RepoDict()
        self.cmd._get_query = stub_fn
        self.cmd._get_query_source = stub_fn
The replaced methods look like this:
def _get_query(self, pkg_spec):
    """Return a query to match a pkg_spec."""
    subj = dnf.subject.Subject(pkg_spec)
    q = subj.get_best_query(self.base.sack)
    q = q.available()
    q = q.latest()
    if len(q.run()) == 0:
        msg = _("No package " + pkg_spec + " available.")
        raise dnf.exceptions.PackageNotFoundError(msg)
    return q

def _get_query_source(self, pkg_spec):
    """Return a query to match a source rpm file name."""
    pkg_spec = pkg_spec[:-4]  # skip the .rpm
    nevra = hawkey.split_nevra(pkg_spec)
    q = self.base.sack.query()
    q = q.available()
    q = q.latest()
    q = q.filter(name=nevra.name, version=nevra.version,
                 release=nevra.release, arch=nevra.arch)
    if len(q.run()) == 0:
        msg = _("No package " + pkg_spec + " available.")
        raise dnf.exceptions.PackageNotFoundError(msg)
    return q
As seen here, stub_fn replaces the _get_query methods from the class under test. At the time it was written this probably seemed like a good idea to speed up writing the tests.
The trouble is we should be replacing the external dependencies of _get_query (other parts of DNF essentially) and not methods from DownloadCommand. To understand why this is a bad idea check PR #113, which directly modifies _get_query. There's no way to test this patch with the current state of the test.
So I took a few days to experiment and update the current test stubs. The result is PR #118. The important bits are the SackStub and SubjectStub classes which hold information about the available RPM packages on the system. The rest are cosmetics to fit around the way the query objects are used (q.available(), q.latest(), q.filter()). The proposed design correctly overrides the external dependencies on dnf.subject.Subject and self.base.sack which are initialized before our plugin is loaded by DNF.
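To make the distinction concrete, here is a small self-contained sketch (not DNF code; Finder and Query are made-up stand-ins) showing the wrong and the right place to put the stub:
import unittest
from unittest import mock  # plain 'import mock' on Python 2

class Query:
    def available(self):
        # imagine this talks to the real package repositories
        raise RuntimeError("needs network access")

class Finder:
    def find_package(self, name):
        # the logic we actually want to exercise
        return [p for p in Query().available() if p == name]

class FinderTest(unittest.TestCase):
    def test_wrong_way(self):
        finder = Finder()
        # replaces the very code under test -- a bug or a patch in
        # find_package() would never be noticed by this test
        finder.find_package = lambda name: [name]
        self.assertEqual(finder.find_package('httpd'), ['httpd'])

    def test_right_way(self):
        finder = Finder()
        # only the external dependency is faked, find_package() still runs
        with mock.patch.object(Query, 'available', return_value=['httpd', 'bash']):
            self.assertEqual(finder.find_package('httpd'), ['httpd'])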
I must say this is the first error of this kind I've seen in my QA practice so far. I have no idea if this was a minor oversight or something which happens more frequently in open source projects but it's a great example nevertheless.
For those of you who'd like to get started on unit testing I can recommend the book The Art of Unit Testing: With Examples in .Net by Roy Osherove!
UPDATE: Part 2 with more practical examples can be found here.
In the last week I've been trying to figure out how many packages conform to the new Harden All Packages policy in Fedora!
From 46884 RPMs, 17385 are 'x86_64' meaning they may contain ELF objects.
Of them, 4489 are reported as failing checksec.
What you should see as the output from checksec is
Full RELRO Canary found NX enabled PIE enabled No RPATH No RUNPATH
Full RELRO Canary found NX enabled DSO No RPATH No RUNPATH
The first line is for binaries, the second one for libraries b/c
DSOs on x86_64 are always position-independent. Some RPATHs are acceptable,
e.g. %{_libdir}/foo/
and I've tried to exclude them unless
other offenses are found. The script which does this is
checksec-collect.
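For the curious, the properties checksec reports on can also be inspected programmatically. Below is a rough sketch using pyelftools (this is not the checksec-collect script; it ignores corner cases such as DT_FLAGS and assumes a reasonably recent pyelftools):
from elftools.elf.elffile import ELFFile

def hardening_info(path):
    with open(path, 'rb') as f:
        elf = ELFFile(f)
        # ET_DYN means a position independent executable or a shared library
        pie_or_dso = elf.header['e_type'] == 'ET_DYN'
        # full RELRO needs a PT_GNU_RELRO segment plus DT_BIND_NOW
        relro = any(seg['p_type'] == 'PT_GNU_RELRO' for seg in elf.iter_segments())
        bind_now = False
        dynamic = elf.get_section_by_name('.dynamic')
        if dynamic is not None:
            bind_now = any(tag.entry.d_tag == 'DT_BIND_NOW'
                           for tag in dynamic.iter_tags())
        return {'PIE/DSO': pie_or_dso, 'Full RELRO': relro and bind_now}

print(hardening_info('/usr/sbin/httpd'))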
Most often I'm seeing Partial RELRO, No canary found and No PIE errors. Since all packages potentially process untrusted input, it makes sense for all of them to be hardened and enhance the security of Fedora. That's why all of these errors should be considered valid bugs.
Please see if your package is in the list and try to fix it, or let me know why it should be excluded, for example if it's a boot loader which doesn't function properly with hardening enabled. The full list is available at GitHub.
For more information about the different protection mechanisms see the following links:
UPDATE 2015-09-17
I've posted my findings on fedora-devel and the comments are more than interesting even revealing an old bug in libtool.
When editing the grub2 menu (especially in EFI mode) it tells you to press Ctrl-x to save your changes and continue the boot process. However this doesn't work on my MacBook Air, see rhbz#1253637, and maybe some other platforms. If this is the case try pressing F10 instead. It works for me!
If you are working with Python and writing unit tests chances are you are familiar with the coverage reporting tool. However there are testing scenarios in which we either don't use unit tests or execute different code paths (test cases) independently of each other.
For example, this is the case with installation testing in Fedora. Because anaconda, the installer, is very complex, the easiest way is to test it live, not with unit tests. Even though we can get a coverage report (anaconda is written in Python) it reflects only the test case it was collected from.
coverage combine
can be used to combine several data files and produce an aggregate
report. This can tell you how much test coverage you have across all your tests.
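For example, assuming the per-test data files have been renamed to .coverage.<testname>, a few lines of the coverage API will merge them and produce one report (a sketch using the coverage.py 4.x API; the file names are hypothetical):
import coverage

cov = coverage.Coverage()
# merge the data collected from separate test runs into one data file
cov.combine(['.coverage.http_install', '.coverage.ftp_install'])
cov.save()
cov.report()                          # aggregate text report on stdout
cov.html_report(directory='htmlcov')  # browsable per-file report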
As far as I can tell Python's coverage doesn't tell you how many times a particular line of code has been executed. It also doesn't tell you which test cases executed a particular line (see PR #59). In the Fedora example, I have the feeling many of our tests are touching the same code base and not contributing that much to the overall test coverage. So I started working on these items.
I imagine a script which will read coverage data from several test executions (preferably in JSON format, PR #60) and produce a graphical report similar to what GitHub does for your commit activity.
See an example here!
The example uses darker colors to indicate more line executions, lighter for less executions. Check the HTML for the actual numbers b/c there are no hints yet. The input JSON files are here and the script to generate the above HTML is at GitHub.
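The general idea behind such a report can be sketched in a few lines of Python (the input format below is made up for illustration and is not the actual JSON the script consumes):
# hits maps line numbers to how many test runs executed that line
hits = {1: 3, 2: 3, 3: 1, 4: 0, 5: 2}
max_hits = max(hits.values()) or 1

rows = []
for lineno in sorted(hits):
    count = hits[lineno]
    # more saturated green for lines executed by more test runs, white for none
    shade = 255 - int(200 * count / max_hits) if count else 255
    color = '#%02xff%02x' % (shade, shade) if count else '#ffffff'
    rows.append('<tr><td>%d</td><td style="background:%s">%d</td></tr>'
                % (lineno, color, count))

with open('heatmap.html', 'w') as out:
    out.write('<table>%s</table>' % ''.join(rows))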
Now I need your ideas and comments!
What kinds of coverage reports are you using in your job? How do you generate them? What do they look like?
It's been a busy week after DEVit conf took place in Thessaloniki. Here are my impressions.
I started the day with the session called "Crack, Train, Fix, Release" by Chris Heilmann. While it was very interesting, for some unknown reason I was expecting a talk more closely related to software testing. Unfortunately at the same time in the other room was a talk called "Integration Testing from the Trenches" by Nicolas Frankel which I missed.
At the end Chris answered the question "What to do about old versions of IE?". And the answer pretty much was "Don't try to support everything, leave them with basic functionality so that users can achieve what they came for on your website. Don't put nice buttons b/c IE 6 users are not used to nice things and they get confused."
If you remember I had a similar question to Jeremy Keith at Bulgaria Web Summit last month and the answer was similar:
Q: Which is Jeremy's favorite device/browser to develop for?
A: Your approach is wrong and instead we should be thinking in terms of what features are essential or non-essential for our websites and develop around features (if supported, if not supported) not around browsers!
Btw I did ask Chris if he knows Jeremy and he does.
After the coffee break there was "JavaScript ♥ Unicode" by Mathias Bynens which I saw last year at How Camp in Veliko Tarnovo so I just stopped by to say hi and went to listen to "The future of responsive web design: web component queries" by Nikos Zinas. As far as I understood Nikos is a local rock-star developer. I'm not much into web development but the opportunity to create your own HTML components (tags) looks very promising. I guess there will be more business coming for Telerik :).
I wanted to listen to "Live Productive Coder" by Heinz Kabutz but that one started in Greek so I switched rooms for "iOS real time content modifications using websockets" by Benny Weingarten-Gabbay.
After lunch I went straight for "Introduction to Docker: What is it and why should I care?" by Ian Miell which IMO was the most interesting talk of the day. It wasn't very technical but managed to clear some of the mysticism around Docker and what it actually is. I tried to grab a few minutes of Ian's time and we found topics of common interest to talk about (Project Atomic anyone?) but later failed to find him and continue the talk. I guess I'll have to follow online.
Tim Perry with "Your Web Stack Would Betray You In An Instant" made a great show. The room was packed, I myself was actually standing the whole time. He described a series of failures across the entire web development stack which gave developers hard times patching and upgrading their services. The lesson: everything fails, be prepared!
The last talk I visited was "GitHub Automation" by Forbes Lindesay. It was more of an inspirational talk, rather than technical one. GitHub provides cool API so why not use it?
From what I know this is the first year of DEVit. For a first timer the team did great! I particularly liked the two coffee breaks before lunch and in the early afternoon and the sponsors pitches in between the main talks.
All talks were recorded but I have no idea what's happening with the videos!
I will definitely make a point of visiting Thessaloniki more often and follow the local IT and start-up scenes there. And tonight is Silicon Drinkabout which will be the official after party of DigitalK in Sofia.
There's a huge list of free books on the topic of software testing. This will definitely be my summer reading list. I hope you find it helpful.
The guys from QAHelp have compiled a list of 200 graduation theses from various universities which are freely accessible online. The list can be found here.
Recently I've been looking into
fixing tilde and Fn keys mapping for MacBook Air
and thought I could use sysctl
to permanently set the desired values. Unfortunately this is not
possible. sysctl
can only write under /proc/sys
and this is
hard-coded in the source:
static const char PROC_PATH[] = "/proc/sys/";
IMO this is relatively easy to patch and allow sysctl to read/write values under /sys. The only open question I see is backward compatibility - maybe adding a new parameter (e.g. --sysfs) or adding extended syntax, e.g. if the variable name starts with / then treat it as an absolute path.
I've asked sysctl maintainers on the procps mailing list but so far got no answer.
Is anyone else interested in this? How do you set parameter values under /sys then?
NOTE: for my particular purposes I could have used config files under /etc/modprobe.d/ or a startup script (I used that) instead.
A quick solution for MacBook Air users running Linux who want to use external projector is to use a USB to VGA adapter. Mine is Plugable UGA-165 and it works great with Red Hat Enterprise Linux 7.1.
After the device is plugged in the udl kernel module is loaded and a new framebuffer device is created (/dev/fb1 in my case). Using mate-display-properties I'm able to configure the 2nd monitor attached to the USB video card. I was able to successfully display an OpenOffice presentation on the 2nd monitor and play a YouTube video.
All USB 2.0 devices from Plugable should be well supported on Linux. For USB 3.0 David Airlie from Red Hat is doing some reverse engineering but I have no idea what the status is. For more info see:
It seems my Thunderbolt to gigabit Ethernet adapter works with RHEL 7.1 on a MacBook Air despite some reports that it may not.
After plugging it in, the device is automatically recognized and the tg3 driver is loaded. Detailed lspci output below:
0a:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet PCIe
Subsystem: Apple Inc. Device 00f6
Physical Slot: 9
Flags: bus master, fast devsel, latency 0, IRQ 19
Memory at cd800000 (64-bit, prefetchable) [size=64K]
Memory at cd810000 (64-bit, prefetchable) [size=64K]
[virtual] Expansion ROM at cd820000 [disabled] [size=64K]
Capabilities: [48] Power Management version 3
Capabilities: [50] Vital Product Data
Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
Capabilities: [a0] MSI-X: Enable+ Count=6 Masked-
Capabilities: [ac] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [13c] Device Serial Number 00-00-ac-87-a3-25-20-33
Capabilities: [150] Power Budgeting <?>
Capabilities: [160] Virtual Channel
Capabilities: [1b0] Latency Tolerance Reporting
Kernel driver in use: tg3
Unplugging and plugging the network cable back in works as expected. I did see my computer freeze 2 out of 10 times when I unplugged the Thunderbolt adapter but couldn't reproduce it reliably or grab more info.
For the record this is with kernel 3.10.0-229.1.2.el7.x86_64 which is missing this upstream commit. I'm not sure why it works though.
If I remember correctly tg3 is available during installation so you should be able to use the Thunderbolt adapter instead of WiFi as well.
One of the best SIP clients for Linux is Twinkle. However upstream is not active (or maybe even dead) and the package is missing from the latest Fedora releases and fails to build on RHEL 7.
First you need to build and install a few dependencies in the following order: ucommon, ccrtp, libzrtpcpp. You will also need EPEL 7 enabled to satisfy build dependencies.
Then apply the following patch to the original twinkle.spec
--- twinkle.spec.orig 2015-05-01 14:07:01.870710147 +0300
+++ twinkle.spec 2015-05-01 15:07:28.734734573 +0300
@@ -47,6 +47,8 @@
%build
export LDFLAGS=-lkio
+export CPPFLAGS="$CPPFLAGS -I/usr/include/libzrtpcpp/"
+%__autoconf
%configure
make %{?_smp_mflags}
The package now builds, installs and runs successfully on RHEL 7. The compiled packages and dependencies are available in my Macbook Air RHEL 7 repository.
There are two problems with the MacBook Air keyboard on Linux:
Function keys and media keys are switched and by default you have to press Fn+F5 in order to refresh a browser page. The solution is
echo 2 > /sys/module/hid_apple/parameters/fnmode
The tilde key is mapped improperly, see RHBZ #1025041. To fix it
echo 0 > /sys/module/hid_apple/parameters/iso_layout
Either add the above commands to a boot script or you can
yum install mba-kbd-fix
from my
Macbook Air RHEL 7 repository.
The RPM source can be found here.
I've made a repository with binary (x86_64 only) and source RPM packages which are missing from Red Hat Enterprise Linux 7 and necessary when using a MacBook Air. To install execute the commands below:
cd /etc/yum.repos.d/
wget https://s3.amazonaws.com/atodorov/rpms/macbook/el7/rhel7-macbook.repo
yum install kmod-wl
yum install kmod-mba6x_bl
And uncomment /etc/X11/xorg.conf.d/98-mba_bl.conf.
Note: the .spec file is available from PR #26.
yum install mba-kbd-fix