Anaconda, the Fedora and Red Hat Enterprise Linux installer, has gained some
features to facilitate building Docker images. These are only available
in kickstart. To build a Docker image for HTTPD using packages provided in the
distro, use the following ks.cfg file:
install
lang en_US.UTF-8
keyboard us
network --onboot yes --device eth0 --bootproto dhcp
rootpw --lock
firewall --disabled
timezone Europe/Sofia
clearpart --all --initlabel
part / --fstype=ext4 --size=1 --grow
bootloader --disabled
%packages --nocore --instLangs=en_US --excludedocs
httpd
-kernel
yum-langpacks # workaround for rhbz#1271766
%end
The above kickstart file will:
install HTTPD and its dependencies
disable kernel installation by excluding it from the package list
disable installation of the boot loader using --disabled. The resulting image
will not be bootable
disable the firewall
lock the root account so root can't log in from the console
prevent installing @core using --nocore
limit the installation of localization files using --instLangs
limit the installation of documentation using --excludedocs
Note: the previous --nobase option is deprecated and doesn't have any effect.
After the VM installation is complete, grab the contents of the root directory:
# virt-tar-out -a /var/lib/libvirt/images/disk.qcow2 / myimage.tar
Import the tarball into Docker and inspect the result:
# docker import myimage.tar local_images:ver1.0
8a2324e6d0e940a998b990262335894a17d261450c33f57dc153d3d1987e4fc1
# docker images
REPOSITORY TAG IMAGE ID CREATED VIRTUAL SIZE
local_images ver1.0 8a2324e6d0e9 13 seconds ago 320.6 MB
registry.access.redhat.com/rhel latest 82ad5fa11820 6 weeks ago 158.3 MB
registry.access.redhat.com/rhscl_beta/httpd-24-rhel7 latest 55a8a150cf2d 9 weeks ago 201.1 MB
Run commands in a new container:
# docker run --name=bash_myimage -it 8a2324e6d0e9 /bin/bash
bash-4.2# cat /etc/redhat-release
Red Hat Enterprise Linux Server release 7.2 Beta (Maipo)
bash-4.2# rpm -q httpd
httpd-2.4.6-40.el7.x86_64
bash-4.2# exit
# docker ps -a
CONTAINER ID IMAGE COMMAND CREATED STATUS PORTS NAMES
64f7ca6d5844 8a2324e6d0e9 "/bin/bash" 24 seconds ago Exited (0) 19 seconds ago bash_myimage
As you can see, the resulting image is bigger than the stock images provided by Red Hat.
At this moment I don't know whether this is the minimum package set which satisfies
the dependencies or whether anaconda adds a bit more on its own. The full package list is
given below. There are some packages, like device-mapper, dracut, e2fsprogs,
iptables, kexec-tools, the SELinux-related ones, systemd and tzdata, which look out
of place. My guess is that some of them are pulled in by the various kickstart
commands and are not really necessary. I will follow up with devel and see if
the content can be stripped down even more.
In my previous post
I’ve talked about testing anaconda and friends and raised some questions.
Today I’m going to give an example of how to answer one of them:
“How different is the code execution path between different tests?”
coverage-tools
I’m going to use coverage-tools
in my explanations below so a little introduction is required. All the tools
are executable Python scripts which build on top of existing coverage.py API.
The difference is mainly in flexibility of parameters and output formatting.
I’ve tried to keep as close as possible to the existing behavior of coverage.py.
coverage-annotate - when given a .coverage data file prints the source code
annotated with line numbers and execution markers.
!!! missing /usr/lib64/python2.7/site-packages/pyanaconda/anaconda_argparse.py
>>> covered /usr/lib64/python2.7/site-packages/pyanaconda/anaconda_argparse.py
...skip...
37 > import logging
38 > log = logging.getLogger("anaconda")
39
40   # Help text formatting constants
41
42 > LEFT_PADDING = 8  # the help text will start after 8 spaces
43 > RIGHT_PADDING = 8  # there will be 8 spaces left on the right
44 > DEFAULT_HELP_WIDTH = 80
45
46 > def get_help_width():
47 >     """
48 >     Try to detect the terminal window width size and use it to
49 >     compute optimal help text width. If it can't be detected
50 >     a default values is returned.
51
52 >     :returns: optimal help text width in number of characters
53 >     :rtype: int
54 >     """
55       # don't do terminal size detection on s390, it is not supported
56       # by its arcane TTY system and only results in cryptic error messages
57       # ending on the standard output
58       # (we do the s390 detection here directly to avoid
59       # the delay caused by importing the Blivet module
60       # just for this single call)
61 >     is_s390 = os.uname()[4].startswith('s390')
62 >     if is_s390:
63 !         return DEFAULT_HELP_WIDTH
64
...skip...
In the example above all lines starting with > were executed by the interpreter.
All top-level import statements were executed, as you would expect. Then the method
get_help_width() was executed (called from somewhere). Because this was an x86_64
machine, line 63 was not executed; it is marked with !. The comments and empty
lines are of no interest.
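For the curious, the annotation boils down to a few coverage.py API calls. Here is a
minimal sketch, not the actual tool code; it only assumes the data file and the
source path from the example above:
import coverage

cov = coverage.Coverage(data_file=".coverage")
cov.load()

# analysis2() returns the executable statements and the ones never executed,
# from which a listing with '>' and '!' markers can be built
fname = "/usr/lib64/python2.7/site-packages/pyanaconda/anaconda_argparse.py"
_, statements, _, missing, _ = cov.analysis2(fname)

with open(fname) as f:
    for lineno, line in enumerate(f, start=1):
        if lineno in missing:
            marker = "!"  # executable, but never executed
        elif lineno in statements:
            marker = ">"  # executed at least once
        else:
            marker = " "  # comment or blank line
        print("%5d %s %s" % (lineno, marker, line.rstrip()))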
coverage-diff - produces git-like diff reports from the text output of annotate.
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
@@ -634,7 +634,7 @@
  634     # Wait to make sure the other threads are done before sending ready, otherwise
  635     # the spoke may not get be sensitive by _handleCompleteness in the hub.
  636 >   while not self.ready:
- 637 !       time.sleep(1)
+ 637 >       time.sleep(1)
  638 >   hubQ.send_ready(self.__class__.__name__, False)
  639
  640 > def refresh(self):
In this example line 637 was not executed in the first test run, while it was executed
in the second test run. Reading the comments above, it is clear that the difference between
the two test runs is just timing and synchronization.
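The diffing itself needs nothing more exotic than Python's difflib applied to two
annotated listings. A sketch, with made-up input file names:
import difflib

# annotated listings produced for the same source file in two test runs
with open("run1/source.py.txt") as f1, open("run2/source.py.txt") as f2:
    a, b = f1.readlines(), f2.readlines()

diff = difflib.unified_diff(
    a, b,
    fromfile="a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py",
    tofile="b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py")
print("".join(diff))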
Kickstart vs. Kickstart
How different is the code execution path between different tests? Looking at
Fedora 23 test results
we see several tests which differ only slightly in their setup: installation
via HTTP, FTP or NFS; installation to SATA, SCSI or SAS drives; installation using
RAID for the root file system. These are good candidates for further analysis.
Note: my results below are not from Fedora 23 but the conclusions still apply!
The tests were executed on bare metal and virtual machines, trying to use the
same hardware or the same system configurations where possible!
--- a/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/packaging/__init__.py
@@ -891,7 +891,7 @@
  891
  892     # Run any listeners for the new state
  893 >   for func in self._event_listeners[event_id]:
- 894 !       func()
+ 894 >       func()
  895
  896 > def _runThread(self, storage, ksdata, payload, fallback, checkmount):
  897     # This is the thread entry
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/lib/resize.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/lib/resize.py
@@ -102,10 +102,10 @@
  102     # Otherwise, fall back on increasingly vague information.
  103 >   if not part.isleaf:
  104 >       return self.storage.devicetree.getChildren(part)[0].name
- 105 >   if getattr(part.format, "label", None):
+ 105 !   if getattr(part.format, "label", None):
  106 !       return part.format.label
- 107 >   elif getattr(part.format, "name", None):
- 108 >       return part.format.name
+ 107 !   elif getattr(part.format, "name", None):
+ 108 !       return part.format.name
  109 !   else:
  110 !       return ""
  111
@@ -315,10 +315,10 @@
  315 > def on_key_pressed(self, window, event, *args):
  316     # Handle any keyboard events. Right now this is just delete for
  317     # removing a partition, but it could include more later.
- 318 >   if not event or event and event.type != Gdk.EventType.KEY_RELEASE:
+ 318 !   if not event or event and event.type != Gdk.EventType.KEY_RELEASE:
  319 !       return
  320
- 321 >   if event.keyval == Gdk.KEY_Delete and self._deleteButton.get_sensitive():
+ 321 !   if event.keyval == Gdk.KEY_Delete and self._deleteButton.get_sensitive():
  322 !       self._deleteButton.emit("clicked")
  323
  324 > def _sumReclaimableSpace(self, model, path, itr, *args):
--- a/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/ui/gui/spokes/source.py
@@ -634,7 +634,7 @@
  634     # Wait to make sure the other threads are done before sending ready, otherwise
  635     # the spoke may not get be sensitive by _handleCompleteness in the hub.
  636 >   while not self.ready:
- 637 !       time.sleep(1)
+ 637 >       time.sleep(1)
  638 >   hubQ.send_ready(self.__class__.__name__, False)
  639
  640 > def refresh(self):
The difference in source.py comes from timing/synchronization and can safely be ignored.
I'm not exactly sure about __init__.py but it doesn't look like a big deal.
We're left with resize.py. The differences in on_key_pressed() are because
I probably used the keyboard instead of the mouse (these are indeed manual installs).
The other difference is in how the partition labels are displayed. One of the installs
was probably using fresh disks while the other was not.
--- a/usr/lib64/python2.7/site-packages/pyanaconda/bootloader.py
+++ b/usr/lib64/python2.7/site-packages/pyanaconda/bootloader.py
@@ -109,10 +109,10 @@
  109 >     try:
  110 >         opts.parity = arg[idx+0]
  111 >         opts.word = arg[idx+1]
- 112 !         opts.flow = arg[idx+2]
- 113 !     except IndexError:
- 114 >         pass
- 115 >     return opts
+ 112 >         opts.flow = arg[idx+2]
+ 113 >     except IndexError:
+ 114 !         pass
+ 115 !     return opts
  116
  117 ! def _is_on_iscsi(device):
  118 !     """Tells whether a given device is on an iSCSI disk or not."""
@@ -1075,13 +1075,13 @@
 1075 >         command = ["serial"]
 1076 >         s = parse_serial_opt(self.console_options)
 1077 >         if unit and unit != '0':
-1078 !             command.append("--unit=%s" % unit)
+1078 >             command.append("--unit=%s" % unit)
 1079 >         if s.speed and s.speed != '9600':
 1080 >             command.append("--speed=%s" % s.speed)
 1081 >         if s.parity:
-1082 !             if s.parity == 'o':
+1082 >             if s.parity == 'o':
 1083 !                 command.append("--parity=odd")
-1084 !             elif s.parity == 'e':
+1084 >             elif s.parity == 'e':
 1085 !                 command.append("--parity=even")
 1086 >         if s.word and s.word != '8':
 1087 !             command.append("--word=%s" % s.word)
As you can see the difference is minimal, mostly related to the underlying
hardware. As far as I can tell this has to do with how the boot loader is
installed on disk, but I'm no expert on this particular piece of code.
I've seen the same difference in other comparisons, so it probably has
more to do with the hardware than with what kind of disk/driver is used.
The difference also appears related to hardware clock settings,
probably due to different defaults in the BIOS on the various machines.
Additional tests with RAID 5 and RAID 6 reveal the exact same difference.
RAID 0 vs. RAID 10 shows no difference at all. Indeed, as far as I know anaconda
delegates the creation of RAID arrays to mdadm once the desired configuration
is known, so these results are to be expected.
Conclusion
As you can see, sometimes there are tests which appear to be very important
but in reality cover a corner case of the base test. For example, if any
of the RAID levels works we can be pretty confident
they won't break in anaconda
(thanks Adam Williamson)!
What you do with this information is up to you. Sometimes QA is able to
execute all the tests and life is good. Sometimes we have to compromise,
skip some testing and accept the risks of doing so. Sometimes you can
execute all tests for every build, sometimes only once per milestone.
Whatever the case, having the information to back up your decision is vital!
In my next post on this topic I'm going to talk more about functional tests
vs. unit tests. Both anaconda and blivet have both kinds of tests and I'm
interested to know, when tests from the two categories focus on the same
functionality, how they differ. If we have a unit test for feature X,
does it warrant spending the resources on functional testing for X as well?
My previous post
was an introduction to testing installation related components. Now I’m going to
talk more about anaconda and how it is tested.
There are two primary ways to test anaconda. You can execute make check in the
source directory, which will trigger the package test suite. The other possibility
is to perform an actual installation, on bare metal or a virtual machine, using the
latest Rawhide snapshots which also
include the latest anaconda. For both of these methods we can collect code
coverage information. In live installation mode coverage is enabled via the
inst.debug boot argument. Fedora 23 and earlier use debug=1 but that
can lead to problems
sometimes.
Kickstart Testing
Kickstart
is a method of automating the installation of Fedora by supplying the necessary
configuration in a text file and pointing the installer at this file. Inside the
anaconda source there is the tests/kickstart_tests directory, where each test
is a kickstart file plus a shell script. The test runner provisions a virtual
machine using boot.iso and the kickstart file. A shell script then verifies the
installation went as expected and copies files of interest to the host system.
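In rough pseudo-Python the runner's job looks something like the sketch below.
This is a toy outline, not the actual scripts; the virt-install flags and the
paths are only illustrative:
import subprocess

def run_kickstart_test(name):
    """Provision a VM from boot.iso with the test's kickstart file."""
    ks = "tests/kickstart_tests/%s.ks" % name
    subprocess.check_call([
        "virt-install", "--name", name,
        "--memory", "2048",
        "--disk", "size=10",
        "--location", "boot.iso",
        "--initrd-inject", ks,
        "--extra-args", "inst.ks=file:/%s.ks" % name,
        "--noautoconsole", "--wait", "-1",
    ])
    # the accompanying shell script checks the installed system
    # and copies files of interest back to the host
    subprocess.check_call(["tests/kickstart_tests/%s.sh" % name])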
Kickstart files are also the basis for testing Fedora installations in
Beaker.
Naturally some of these in-package kickstart tests are the same as
out-of-band kickstart tests.
Hint: there are more available but not yet public.
The question which I don’t have an answer for right now is
“Can we remove some of the duplicates and how this affects devel and QE teams” ?
The pro of in-package testing is that it is faster compared to Beaker. The con
is that you're not testing the real distro (every snapshot is a possible final
release to the users).
Dogtail
Dogtail uses accessibility technologies to
communicate with desktop applications. It is written in Python and can be used
as a GUI test automation framework. A long time ago I proposed Dogtail support
in anaconda, which was rejected; a couple of years later it was accepted, and
later it was removed from the code again.
Anaconda has in-package Dogtail tests (tests/gui/). They work by attaching
a second disk image with the test suite to a VM running a LiveCD. Anaconda is
started on the LiveCD and an attempt to install Fedora on disk 1 is made.
Everything is driven by the Dogtail scripts. There are only a few of these
tests available and they are currently disabled.
Red Hat QE has also created another method for running Dogtail tests in anaconda
using an updates.img with the previous functionality.
Even if there are some duplicate tests I'm not convinced we have to drop the
tests/gui/ directory from the code, because
the framework used to drive the graphical interface of anaconda appears to be very
well written. The code is clean and easy to follow.
Also I don't have metrics on how much these two methods differ or how much they cover
in their testing. IMO they are pretty close, and until we can find a way to
reliably execute them on a regular basis there isn't much to be done here.
One idea is to use the --dirinstall or --image options and skip the
LiveCD part entirely.
How Much is Tested
make ci covers 10% of the entire code base for anaconda. Mind you,
tests/storage and tests/gui are currently disabled.
See PR #346,
PR #327 and
PR #319!
There is definitely room for improvement.
On the other hand live installation testing does much
better. Text mode covers around 25% while graphical installations cover around 40%.
Text and graphical combined cover 50%. These numbers will drop quite a bit
once anaconda learns to
include all possible files
in its report but they are a good estimate.
The important questions to ask here are:
How much can PyUnit tests cover in anaconda?
How much can kickstart tests cover?
Have we reached a threshold in either of the two primary testing methods?
Does UI automation (with Dogtail) improve anything?
When testing a particular feature (say user creation) how different is the
code execution path between manual (GUI) testing, kickstart testing and unit testing?
If it's not so different, can we invest in unit tests instead of higher-level tests?
How different is the code execution path between different tests (manual or kickstart)?
In other words, how much value are we getting from the testing for the resources we're putting in?
In my next post I will talk more about these questions and some rudimentary
analysis against coverage data from the various test methods and test cases!
Since early 2015 I’ve been working on testing installation related
components in Rawhide. I’m interested in the code produced by the
Red Hat Installer Engineering Team and in
particular in anaconda, blivet, pyparted and pykickstart. The goal of
this effort is to improve the overall testing of these components and also
have Red Hat QE contribute some of our knowledge back to the community. The benefit
of course will be better software for everyone. In the next
several posts I’ll summarize what has been done so far and what’s to be expected
in the future.
Test Documentation Matters
Do you want others to contribute tests? I certainly do! When I started looking
at the code it was immediately clear there was no documentation related to testing.
Everyone needs to know how to write and execute these tests! Currently we have
basic README files describing how to install the necessary dependencies for development
and test execution, how to execute the tests (and what can be tested) and, most
importantly, what the test architecture is. There is a description of how the file
structure is organized and which base classes to inherit from when adding
new tests. Most of the time each component goes through a pylint check and
a standard PyUnit test suite.
Test documentation is usually in a tests/README file. For example:
I’ve tried to explain as much as possible without bloating the files and going into
unnecessary details. If you spot something missing please send a pull request.
Continuous Integration
This has been largely an effort driven by Chris Lumens from the devel team.
All the components I’m interested in are tested regularly in a CI environment.
There is a make ci Makefile target for those of you interested in what exactly
gets executed.
Test Coverage
In order to improve something you need to know where you stand. Well, I didn't.
That’s why the first step was to integrate the
coverage.py tool with all of these components.
With the exception of pyparted (written in C) all of the other
components integrate well with coverage.py and produce good statistics. pykickstart is
the champ here with 90% coverage, while anaconda is somewhere between 10% and 50%.
Full test coverage measurement for anaconda isn't straightforward and will be the
subject of my next post. For the C-based code we have to hook up with
Gcov, which shouldn't be too difficult.
At the moment there are several open pull requests to integrate the coverage test
targets with make ci and also report the results in human readable form. I will be
collecting these for historical references.
Tools
I’ve created some basic text-mode
coverage-tools to help me combine and
compare data from different executions. These are only the start of it and I’m expanding
them as my needs for reporting and analytics evolve. I’m also looking into
more detailed coverage reports
but I don’t have enough data and use cases to work on this front at the moment.
Some ideas currently in mind (a rough sketch of the first one follows this list):
map code changes (git commits) to existing test coverage to get a feeling for where to
invest in more testing;
map bugs to code areas and to existing test coverage to see if we aren't
missing tests in areas where the bugs are happening.
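The sketch below illustrates the first idea, assuming a checked-out git repo and a
.coverage data file (the example path is hypothetical): take the lines a commit
touched from the git diff hunk headers and intersect them with the lines the tests
never executed.
import subprocess
import coverage

cov = coverage.Coverage(data_file=".coverage")
cov.load()

def changed_lines(commit, path):
    """Line numbers added or changed by `commit` in `path`."""
    out = subprocess.check_output(
        ["git", "diff", "-U0", commit + "^", commit, "--", path])
    lines = set()
    for row in out.decode().splitlines():
        if row.startswith("@@"):
            # hunk header looks like "@@ -a,b +c,d @@"; new lines start at c
            new = row.split("+")[1].split(" ")[0]
            start, _, count = new.partition(",")
            lines.update(range(int(start), int(start) + int(count or 1)))
    return lines

# hypothetical example file
path = "pyanaconda/kickstart.py"
_, statements, _, missing, _ = cov.analysis2(path)
untested = changed_lines("HEAD", path) & set(missing)
print("changed but never executed by the tests:", sorted(untested))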
Bugs
coverage.py is a very nice tool indeed but I guess most people use it in a very
limited way. Shortly after I started working with it I found several places which
need improvement. These have to do with combining, and reporting on, multiple data files.
Some of the interesting issues I've found, which are still open, are:
In my next post I will talk about anaconda code coverage and what I want to do with it.
In the meantime please use the comments to share your feedback.
I’ve previously written about my
Thunderbolt to Ethernet adapter working on Linux
despite claims that it should not. Recently I used my MacBook to give a presentation
and the Thunderbolt to VGA adapter worked well enough.
It was an Acer adapter but I have no more details b/c it wasn't mine.
Before the event I tested it and it worked, so on the day of the event I
freshly rebooted my laptop, to be sure no crashed processes or anything like that
were running, and gave it a go.
The first time I plugged in the MacBook everything worked like a charm. Then my computer was
unplugged and the lid closed, causing it to suspend. The second time I plugged it in
I was told there was nothing showing on the projector, so I quickly unplugged the adapter
and plugged it back in. It worked, more or less.
At the time I had LibreOffice Impress in presentation
mode but I did see ABRT detecting a kernel problem. When my slides popped up the text
on the first one was mostly missing but the rest were ok!
Mind you I’m still running RHEL 7 on my MacBook Air. The above is
with kernel-3.10.0-229.14.1.el7.x86_64.
In software testing, usually unit testing, test stubs are programs that simulate
the behavior of external dependencies that the module under test depends
on. Test stubs provide canned answers to calls made during the test.
I’ve discovered an improperly written stub method in one of
DNF’s tests:
def _get_query(self, pkg_spec):
    """Return a query to match a pkg_spec."""
    subj = dnf.subject.Subject(pkg_spec)
    q = subj.get_best_query(self.base.sack)
    q = q.available()
    q = q.latest()
    if len(q.run()) == 0:
        msg = _("No package " + pkg_spec + " available.")
        raise dnf.exceptions.PackageNotFoundError(msg)
    return q

def _get_query_source(self, pkg_spec):
    """Return a query to match a source rpm file name."""
    pkg_spec = pkg_spec[:-4]  # skip the .rpm
    nevra = hawkey.split_nevra(pkg_spec)
    q = self.base.sack.query()
    q = q.available()
    q = q.latest()
    q = q.filter(name=nevra.name, version=nevra.version,
                 release=nevra.release, arch=nevra.arch)
    if len(q.run()) == 0:
        msg = _("No package " + pkg_spec + " available.")
        raise dnf.exceptions.PackageNotFoundError(msg)
    return q
As seen here, stub_fn replaces the _get_query methods of the class under
test. At the time of writing this probably seemed like a good idea to
speed up writing the tests.
The trouble is we should be replacing the external dependencies of _get_query
(other parts of DNF essentially) and not methods from DownloadCommand. To
understand why this is a bad idea check
PR #113,
which directly modifies _get_query. There’s no way to test this patch
with the current state of the test.
So I took a few days to experiment and update the current test stubs. The
result is
PR #118.
The important bits are the SackStub and SubjectStub classes which hold
information about the available RPM packages on the system. The rest are cosmetics
to fit around the way the query objects are used (q.available(), q.latest(), q.filter()).
The proposed design correctly overrides the external dependencies on
dnf.subject.Subject and self.base.sack which are initialized before our
plugin is loaded by DNF.
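For illustration, the general shape of such stubs is shown below. This is a
hypothetical sketch in the spirit of the PR, not its actual code:
class QueryStub(object):
    """Mimics the chained query API used by the plugin."""
    def __init__(self, packages):
        self.packages = packages

    def available(self):
        return self

    def latest(self):
        return self

    def filter(self, **kwargs):
        matched = [pkg for pkg in self.packages
                   if all(getattr(pkg, key) == value
                          for key, value in kwargs.items())]
        return QueryStub(matched)

    def run(self):
        return self.packages

class SackStub(object):
    """Holds the 'available' packages; stands in for self.base.sack."""
    def __init__(self, packages):
        self.packages = packages

    def query(self):
        return QueryStub(self.packages)
A test can then seed SackStub with fake package objects and exercise the real
_get_query code unmodified.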
I must say this is the first error of this kind I’ve seen in my QA practice so far.
I have no idea if this was a minor oversight or something which happens more frequently
in open source projects but it’s a great example nevertheless.
In the last week I’ve been trying to figure out how many packages
conform to the new
Harden All Packages
policy in Fedora!
Out of 46884 RPMs, 17385 are 'x86_64', meaning they may contain ELF objects.
Of these, 4489 are reported as failing checksec.
What you should see as the output from checksec is
Full RELRO Canary found NX enabled PIE enabled No RPATH No RUNPATH
Full RELRO Canary found NX enabled DSO No RPATH No RUNPATH
The first line is for binaries, the second one for libraries, b/c
DSOs on x86_64 are always position-independent. Some RPATHs are acceptable,
e.g. %{_libdir}/foo/, and I've tried to exclude them unless
other offenses are found. The script which does this is
checksec-collect.
Most often I’m seeing Partial RELRO, No canary found and No PIE errors.
Since all packages potentially process untrusted input, it makes sense for all of them
to be hardened and enhance the security of Fedora. That’s why all of these errors
should be considered valid bugs.
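The gist of the check is simple. Below is a simplified sketch; the real
checksec-collect script also handles the acceptable RPATH exceptions mentioned above:
HARDENED_FLAGS = ("Full RELRO", "Canary found", "NX enabled",
                  "No RPATH", "No RUNPATH")

def is_hardened(line):
    """True if one line of checksec output describes a hardened ELF object."""
    if not all(flag in line for flag in HARDENED_FLAGS):
        return False
    # executables must be PIE; libraries show "DSO" instead because
    # DSOs on x86_64 are always position-independent
    return "PIE enabled" in line or "DSO" in line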
Attn package maintainers
Please see if your package is in the list and try to fix it, or let me know
why it should be excluded: for example, it's a boot loader and doesn't function
properly with hardening enabled. The full list is available on
GitHub.
For more information about the different protection mechanisms see the following
links:
Do you remember the
pedometer bug in Samsung Gear Fit
I’ve discovered earlier ? It turns out that Samsung is a fan of this one
and has the exact same bug in their S Health application.
The application doesn’t block pedometer(e.g. steps counting) while
performing other activities such as cycling for example. So in reallity it
reports incorrect value for burned callories. At this time I call it
bad software development practice/architecture on Samsung’s part which leads
to this bug being present.
When editing the grub2 menu (especially in EFI mode) it tells you to
press Ctrl-x to save your changes and continue the boot process.
However this doesn’t work on Apple hardware
(rhbz#1253637)
and maybe some other platforms. If this is the case try pressing F10
instead. It works for me!
If you are working with Python and writing unit tests chances are you are
familiar with the coverage reporting
tool. However there are testing scenarios in which we either don't use unit tests
or execute different code paths (test cases) independently of each other.
For example, this is the case with installation testing in Fedora. Because anaconda,
the installer, is very complex, the easiest way is to test it live, not with unit tests.
Even though we can get a coverage report (anaconda is written in Python), it reflects
only the test case it was collected from.
coverage combine can be used to combine several data files and produce an aggregate
report. This can tell you how much test coverage you have across all your tests.
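The same aggregation can be done programmatically with the coverage.py API. A
minimal sketch, with made-up data file paths:
import coverage

cov = coverage.Coverage(data_file=".coverage.combined")
# one data file per test execution, e.g. text-mode and GUI installations
cov.combine(["/tmp/text-install/.coverage", "/tmp/gui-install/.coverage"])
cov.save()
cov.html_report(directory="htmlcov")  # aggregate report across all tests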
As far as I can tell Python’s coverage doesn’t tell you how many times a particular
line of code has been executed. It also doesn’t tell you which test cases executed
a particular line
(see PR #59).
In the Fedora example, I have the feeling many of our tests are touching the same
code base and not contributing that much to the overall test coverage.
So I started working on these items.
I imagine a script which will read coverage data from several test executions
(preferably in JSON format,
PR #60) and produce a
graphical report similar to what GitHub does for your commit activity.
The example uses darker colors to indicate more line executions and lighter colors
for fewer executions. Check the HTML for the actual numbers b/c there are no hints yet.
The input JSON files are
here and
the script to generate the above HTML is at
GitHub.
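For illustration only, a stripped-down toy version of the idea could look like
this (the JSON schema here is an assumption, not necessarily what the linked
files use):
import json

# GitHub-style palette: index 0 = never executed, 4 = executed most often
PALETTE = ["#eeeeee", "#d6e685", "#8cc665", "#44a340", "#1e6823"]

def color(hits, max_hits):
    if hits == 0:
        return PALETTE[0]
    return PALETTE[min(4, 1 + 3 * hits // max_hits)]

with open("coverage.json") as f:
    data = json.load(f)  # assumed schema: {filename: {lineno: hit_count}}

for filename, lines in data.items():
    max_hits = max(lines.values()) or 1
    print("<h2>%s</h2>" % filename)
    for lineno in sorted(lines, key=int):
        hits = lines[lineno]
        print('<span title="%d hits" style="background: %s">%s</span>'
              % (hits, color(hits, max_hits), lineno))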
Now I need your ideas and comments!
What kinds of coverage reports are you using in your job? How do you generate them?
What do they look like?
Open, coordinated e-government for citizens and business;
What has already been done by municipalities in Bulgaria;
Strategy of Sofia municipality for open data;
Upcoming open data programming contest organized by Sofia municipality.
I will also be participating in this event by covering two topics I’m close to:
Entrepreneurship and open data - sharing my limited experience with Difio
and processing open data;
Technical tips for a successful open data hackathon - sharing my observations as
a mentor at HackFMI and giving some recommendations which will
help the aforementioned open data contest make a difference instead of being just
another dull event organized by governmental agencies.
So far the other confirmed speakers are Rado from HackBulgaria/
HackFMI and Obshtestvo.bg, who are
working in the field of open government and open data.
I’m also in touch with the event organizers and helping a little bit with the program.
If you’re interested in speaking please get in touch with me ASAP.
It’s been a busy week after DEVit conf took place in
Thessaloniki. Here are my impressions.
Sessions
I’ve started the day with the session called “Crack, Train, Fix, Release” by
Chris Heilmann. While it was very interesting for some unknown reason I
was expecting a talk more closely related to software testing. Unfortunately at the
same time in the other room was a talk called “Integration Testing from the Trenches”
by Nicolas Frankel which I missed.
At the end Chris answered the question "What to do about old versions of IE?".
The answer pretty much was: "Don't try to support everything; leave them with
basic functionality so that users can achieve what they came for on your website.
Don't put in nice buttons b/c IE 6 users are not used to nice things and they get confused."
If you remember I had a similar question to Jeremy Keith at
Bulgaria Web Summit last month
and the answer was similar:
Q: Which one is Jeremy’s favorite device/browser to develop for. A: Your approach is wrong and instead we should be thinking in terms of what features are essential or non-essential for our websites and develop around features (if supported, if not supported) not around browsers!
Btw I did ask Chris if he knows Jeremy and he does.
After the coffee break there was “JavaScript ♥ Unicode” by Mathias Bynens which
I saw last year at How Camp in Veliko Tarnovo so I just stopped by
to say hi and went to listen to
“The future of responsive web design: web component queries” by Nikos Zinas.
As far as I understood Nikos is a local rock-star developer. I’m not much into web
development but the opportunity to create your own HTML components (tags) looks
very promising. I guess there will be more business coming for
Telerik :).
I wanted to listen to “Live Productive Coder” by Heinz Kabutz but that one started
in Greek so I switched the room for
“iOS real time content modifications using websockets” by Benny Weingarten-Gabbay.
After lunch I went straight for
“Introduction to Docker: What is it and why should I care?” by Ian Miell
which IMO was the most interesting talk of the day. It wasn’t very technical but
managed to clear some of the mysticism around Docker and what it actually is.
I tried to grab a few minutes of Ian’s time and we found topics of common interest
to talk about (Project Atomic anyone?) but later
failed to find him and continue the talk. I guess I’ll have to follow online.
Tim Perry with “Your Web Stack Would Betray You In An Instant” made a great show.
The room was packed, I myself was actually standing the whole time. He described a series
of failures across the entire web development stack which gave developers hard times
patching and upgrading their services. The lesson: everything fails, be prepared!
The last talk I visited was “GitHub Automation” by Forbes Lindesay. It was more of an
inspirational talk, rather than technical one. GitHub provides cool API so why not use it?
Organization
From what I know this is the first year of DEVit. For a first timer the team did great!
I particularly liked the two coffee breaks before lunch and in the early afternoon and the
sponsors pitches in between the main talks.
All talks were recorded but I have no idea what’s happening with the videos!
I will definitely make a point of visiting Thessaloniki more often and follow the local
IT and start-up scenes there. And tonight is Silicon Drinkabout which will be the official
after party of DigitalK in Sofia.
There’s a huge list of
free books
on the topic of software testing. This will definitely be my summer reading list.
I hope you find it helpful.
200 Graduation Theses About Software Testing
The guys from QAHelp have compiled a list of 200
graduation theses from various universities which are freely accessible
online. The list can be found
here.
IMO this is relatively easy to patch to allow sysctl to read/write values under /sys.
The only open question I see is backward compatibility: maybe adding a new parameter (e.g. --sysfs)
or adding extended syntax, e.g. if the variable name starts with / then treat it as an absolute path.
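In code the proposed name resolution is trivial. An illustrative sketch of the
semantics (in Python, not an actual procps patch):
def sysctl_path(name):
    """Map a sysctl variable name to the file that backs it."""
    if name.startswith("/"):
        return name  # extended syntax: absolute path, e.g. under /sys
    # classic behavior: kernel.hostname -> /proc/sys/kernel/hostname
    return "/proc/sys/" + name.replace(".", "/")

def sysctl_write(name, value):
    with open(sysctl_path(name), "w") as f:
        f.write(str(value))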
I’ve asked sysctl maintainers on the
procps mailing list
but so far got no answer.
Is anyone else interested in this? How do you set parameter values under /sys then?
NOTE: for my particular purposes I could have used config files under
/etc/modprobe.d/ or a startup script (I used that) instead.
A quick solution for MacBook Air users running Linux who want to
drive an external projector is a USB to VGA adapter. Mine is
Plugable UGA-165
and it works great with Red Hat Enterprise Linux 7.1.
After the device is plugged in the udl kernel module is loaded
and a new framebuffer device is created (/dev/fb1 in my case). Using
mate-display-properties I’m able to configure the 2nd monitor attached
to the USB video card. I was able to successfully display an OpenOffice
presentation on the 2nd monitor and play a YouTube video.
All USB 2.0 devices from Plugable should be well supported on Linux.
For USB 3.0 David Airlie from Red Hat is doing some reverse engineering
but I have no idea what the status is. For more info see:
After plugging in the device it is automatically recognized and the tg3 driver is loaded.
Detailed lspci output below:
0a:00.0 Ethernet controller: Broadcom Corporation NetXtreme BCM57762 Gigabit Ethernet PCIe
Subsystem: Apple Inc. Device 00f6
Physical Slot: 9
Flags: bus master, fast devsel, latency 0, IRQ 19
Memory at cd800000 (64-bit, prefetchable) [size=64K]
Memory at cd810000 (64-bit, prefetchable) [size=64K]
[virtual] Expansion ROM at cd820000 [disabled] [size=64K]
Capabilities: [48] Power Management version 3
Capabilities: [50] Vital Product Data
Capabilities: [58] MSI: Enable- Count=1/8 Maskable- 64bit+
Capabilities: [a0] MSI-X: Enable+ Count=6 Masked-
Capabilities: [ac] Express Endpoint, MSI 00
Capabilities: [100] Advanced Error Reporting
Capabilities: [13c] Device Serial Number 00-00-ac-87-a3-25-20-33
Capabilities: [150] Power Budgeting <?>
Capabilities: [160] Virtual Channel
Capabilities: [1b0] Latency Tolerance Reporting
Kernel driver in use: tg3
Unplugging and plugging the network cable back in works as expected.
I did see my computer freeze 2 out of 10 times when I unplugged the Thunderbolt
adapter but couldn't reproduce it reliably or grab more info.
For the record this is with kernel 3.10.0-229.1.2.el7.x86_64 which is missing
this
upstream commit.
I’m not sure why it works though.
If I remember correctly tg3 is available during installation so you should
be able to use the Thunderbolt adapter instead of WiFi as well.
One of the best SIP clients for Linux is Twinkle.
However upstream is not active (or maybe even dead), the package is missing from
the latest Fedora releases and it fails to build on RHEL 7.
First you need to build and install a few dependencies in the following order:
ucommon,
ccrtp,
libzrtpcpp.
You will also need EPEL 7 enabled
to satisfy build dependencies.
Then apply the following patch to the original
twinkle.spec
The package now builds, installs and runs successfully on RHEL 7.
The compiled packages and dependencies are available in my
Macbook Air RHEL 7 repository.
Either add the above commands to a boot script or
yum install mba-kbd-fix from my
Macbook Air RHEL 7 repository.
The RPM source can be found here.