
What I Learned from IT Weekend

IT Weekend

Last week I attended the first IT Weekend in Bulgaria. It's like a training camp for athletes, but for QA engineers. There were 20 people attending and the format was very friendly and relaxed. The group had members with various levels of experience, technical skills and areas of work. All presentations are on YouTube. Here's a brief summary of what happened and what I learned.

I had the honor of presenting in the first slot and gave a quick introduction to mutation testing. This was my first time giving this talk and I'm not entirely happy with how I presented it. Mutation testing also touches a lot on unit tests, programming and source code, which in some organizations falls to the development department. I think mutation testing is harder to understand for people not familiar with it than I initially thought. I'm taking a note to improve the way I present this topic to the public.

Yavor Donev gave a good overview of Appium and how to use it on Android. The most important question for me was "Is it possible to utilize the same test suite on Android and iOS, given that the environments are different?" By this I mean that regardless of how much we try to make the same (native) app on both platforms, it will end up different because the platforms are essentially different. For example there is a different number of physical buttons available.

If we assume that both the iOS and Android apps follow the same design and use a similar workflow then it should be possible to create a test suite which is platform aware and accounts for the minor differences. We're also adding another layer of complexity by introducing the requirement that both apps stay in sync with each other and account for the quirks of the foreign platform. Depending on your apps and goals this may not be an easy task!
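To make this more concrete, here is a rough sketch of one way to keep a single suite platform-aware (my own illustration in Python; the locator values are made up and this is not Appium's API):

import os

# which platform this run targets, e.g. set by the CI job
PLATFORM = os.environ.get("PLATFORM", "android")

# the "same" screen element may need a different locator per platform
LOCATORS = {
    "submit_button": {
        "android": "com.example:id/submit",   # made-up Android resource id
        "ios": "SubmitButton",                # made-up iOS accessibility id
    },
}

def locator(name):
    """Resolve a logical element name to the locator for the current platform."""
    return LOCATORS[name][PLATFORM]

# tests then refer to logical names only, e.g. tap(locator("submit_button")),
# so the platform differences stay in one place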

The one thing I didn't like about Appium is that upstream doesn't care much about version compatibility and they tend to break and change stuff arbitrarily between releases. That said, if it works, don't update it, or otherwise be extremely careful.

I also had a nice chat with Yavor on the topic of career change, learning to program and working with people who have very little coding experience. His approach is to develop a higher level test framework on top of Appium which his team mates can use more easily.

Aneta Petkova's Selenium Grid in Unix Environment is a bit out of my domain. However I took away one important lesson: regardless of how great your tools are, there are minor details which can make or break your day. In her case these are the physical location of the tests (e.g. which Selenium node runs them) and access to shared resources. It turns out WebDriver doesn't give you this information directly and you need to jump through hoops to get it. Her solution was to place the test code on the Grid Hub and provide a shared file system.

The bigger lesson is: whenever you have to design an automated test environment (aka test lab) make sure to evaluate your needs beforehand.

The last talk was a guest appearance by Denitsa Evtimova. She is a QA architect with 16 years of experience and presented the QA strategy at Paysafe group. They have a large monolithic system (legacy code) and have adopted a pyramid-style approach to testing. Whenever possible tests are brought down to the lowest level (e.g. unit tests) and not repeated on the higher levels. At the top stands manual testing. Teams are small: 3-4 developers and 2-3 QAs. It is the team's responsibility to make sure tests are implemented at the lowest possible level. The process is not strictly enforced and the company relies more on self-governance in this aspect. Also everyone on the team can contribute additional tests whenever they see something missing. Test (writing) tasks are all logged in JIRA. They are also small so that everything can be completed within the same day.

The second day was more informal. We did a quick exploratory testing exercise and shared opinions on different test tools. Then the group had a discussion about soft skills and how QA engineers can change the perception of developers about the QA profession (especially in teams where there are many manual testers). The key points are:

  • Criticize the software, not the person, e.g. don't blame the person directly;
  • Communicate with concrete facts and data, not emotions and perceptions;
  • Jokes of the type "how many QA engineers are needed to screw a light bulb" are a problem because they lead to underestimation of the job role;
  • Sometimes it is not quite clear (to others) how the QA role contributes to the development of the product and the organization;
  • For a QA it is important to be able to give a non-biased opinion and observations on what is happening with the product/process;
  • A QA person needs to be very calm. They have to be able to listen to everybody (especially developers) and accept their point of view but at the same time also communicate their own point of view.
  • It is important to sit together with developers and observe the problem, brainstorm and propose possible solutions. This also creates a feedback loop where the developer feels empowered because he's part of the process identifying the problem and proposing the best solution;
  • In agile teams it is a good idea to rotate people between developer and QA positions. This will help them better understand the job of others, acquire new skills and also bring fresh thinking to the team;
  • Quality Assurance is an ungrateful job and only people with very calm and methodical thinking (able to follow through and write down all possible scenarios) excel in this field. On the other hand developers usually think about the happy path scenarios and strive to make their code work as best as they can;
  • By rotating job roles within the team developers will quickly find out that testing is not their field and gain respect towards their QA peers;
  • US managers have the habit of saying "good job" to everyone, even for small and routine tasks. In Bulgaria (and maybe elsewhere) we're not used to this. Instead we're used to being scolded when we do something wrong. If everything is good then we don't receive any recognition;
  • Using the American "good job" is actually a good thing. Teammates will start performing better over time because they will feel their work is valued and not meaningless; they will feel recognized, which will boost morale and productivity.

Thanks for reading and happy testing!


Peter Sabev on Test Automation

the automation snake chart

Last week Peter Sabev gave his talk "On Reporting Bugs: Errors Made and Lessons Learned" for DEV.bg (watch in Bulgarian). At the end of the talk there was a quick question about how he would approach automation. I have always approached automation in terms of the manpower and skills available within the team, while he proposed an approach based on return on investment.

Given a team with a strong understanding of the software (code) under test and good coding skills, you could start with the hardest test cases first. This way the team will have lots of hard work upfront and there will be some lead time without visible results. However, once the hardest/most complex test cases are automated you will most likely have covered a big portion of the SUT.

On the contrary, when you start with the easiest test cases first the team will progress gradually and have enough time to get to grips with the SUT. You are also more likely to miss regressions or bugs in the meantime. With this approach every subsequent automated test will be harder to write and more complex than the previous one. This is a good fit for teams who don't have strong experience with test automation and/or are unfamiliar with the product.

Peter proposes a different approach. He plots the test cases as dots, based on how much time they take to execute manually and how much time/how hard it is to automate the particular case. Then you start to move from the lower right corner towards the upper left corner in a weaving motion, like a snake.

His argument is that once you automate the test cases which are not very complex but require lots of time to execute by hand, you free up resources within the team. As you progress up the chart the test cases become harder to automate and yield less return on investment because they don't take so much time to execute manually.
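As a rough approximation of that idea (my own sketch, not Peter's exact algorithm), you could rank candidate test cases by how much manual time they save per unit of automation effort and work down the list:

# (name, manual execution time in minutes, automation effort on a 1-10 scale)
test_cases = [
    ("login smoke test", 5, 2),
    ("monthly report export", 45, 3),
    ("payment flow", 60, 8),
    ("admin audit log", 20, 9),
]

# highest return on investment first: slow to execute by hand, cheap to automate
by_roi = sorted(test_cases, key=lambda tc: tc[1] / tc[2], reverse=True)

for name, manual_minutes, effort in by_roi:
    print("%s: saves %d min/run, automation effort %d/10" % (name, manual_minutes, effort))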

For more information about Peter's approach please read his article.

As you can see from the snake chart, the team constantly faces test scenarios jumping up and down on the automation hardness scale. This also means that you need to have the suitable skills within the team. IMO this is best suited for teams where each member has a different degree of experience. I'm also in favor of using the snake chart as a tool to distribute automation tasks within the team.

If you'd like to hear more about Peter's and my views on manual vs. automated testing be sure to follow DEV.bg. We are going to host a discussion on October 18th so stay tuned!


What I Learned from EuRuKo 2016

EuRuKo 2016

As my frequent readers may know I try to summarize all the conferences and events I go to. This year's EuRuKo inspired me to take a different approach and instead of quickly summarizing the event I will try to highlight what I have learned from it! My intention is to use this as a tool to improve my skills and the work I do. It will probably be a long post so here we go.

Let me say that I don't consider myself a Ruby developer although I do write a small amount of Ruby code. I also don't really consider myself a developer although I have a formal degree in software engineering and do my fair share of open source contributions.

Being different and thinking differently has always been helpful to me in Quality Assurance and this time was no exception. Attending a conference I knew nothing about and meeting with people whose job is totally different than mine turned out to be my greatest experience on the conference circuit this year.

Lesson 1

Get out of the comfort zone, meet new people, exchange ideas and learn! The very fact that I am writing this post not following my usual summary style proves this is working.

Very early during the event I started to notice a recurring theme which grew stronger by the minute. The Ruby community is very open and inclusive to newcomers and they seem to be doing a very good job of on-boarding everyone who wants to learn. I already wrote about Ivan Nemitchenko's experience of organizing remote internships and there are also the Rails Girls local communities, the Rails Girls Summer of Code (I didn't know about it) and the various local Ruby communities who pitched their cities to host the next EuRuKo. I really loved this feeling of community. In the broader Linux, Python and QA world I have not seen this being so pronounced.

Lesson 2

Open up (the open source) community even more. Make it easier for newcomers to join! Treat them as human and don't expect them to be like yourself. Do teach and mentor both to help newcomers but also to help yourself become better!

This is mostly on par with my community work but I think I can do better. I will take the time to evaluate what I've been doing in the past and identify areas for improvement. I encourage my readers and students to send me feedback as well.

I've also learned that junior developers can make meaningful contributions to production grade code when they are given the appropriate set of tasks and guidance. Stephanie Nemeth argued that companies should hire (more) enthusiastic career changers as junior developers because they have very strong motivation for success.

Lesson 3

Re-evaluate how we look at junior developers, especially how we examine and hire them and how we on-board them.

Both lessons 2 and 3 are valid in the open source world and even more so in the corporate world.

I also liked the fact that some of the lightning talks were given by people who had no previous experience in Ruby. @TeamJoda2016 talked about what they did and learned throughout the summer and really cracked up the room with their "oh and btw we are looking for a job" final slide!

Lesson 4

If you are new/inexperienced at something don't be afraid to try it out. Give it the best you've got and see how it goes. Worst case .... well nothing bad really happens, best case you end up doing the best job in your life. That's also been my personal experience with software testing.

Carina C. Zona's Consequences Of An Insightful Algorithm (old video here) dealt with our ethical responsibilities as developers, a topic that is becoming ever more relevant with deep learning and neural networks.

Lesson 5

We’re able to extract remarkably precise intuitions about an individual. But do we have a right to know what they didn’t consent to share, even when they willingly shared the data that leads us there?

Krissy's The HTT(Pancake) Request made a great analogy of consuming APIs with your customer experience when visiting a restaurant.

Lesson 6

Design APIs (software in general) as if they were a physical product where your customers' happiness matters. We see this all the time in our daily jobs and we're guilty of doing it as well. Btw, at the moment I'm in the middle of a huge refactoring of django-chartit which breaks all backwards compatibility. I guess I will have to re-evaluate my design and approach.

By accident I made good friends with Alex Georgiev and the folks at Fyber. I liked the fact that they had a couple of people working in QA at the conference and we managed to have a nice talk about QA vs. developers and the transformation between the two. That also touched on the bigger subject of testers not being able to code and testers not being available for hire.

Lesson 7

Driving people to improve their skills (learn to code, write tests, etc) is possible but needs to come from management, needs clear direction and also a little bit of peer pressure.

After all, isn't that what an agile team is supposed to be?

Now, being the able-to-code, not-entirely-Ruby-ignorant QA guy that I am, I was immediately offered several positions in London and Berlin (and no, I'm still staying in Sofia). As it turns out, good QA engineers with good development skills are in greater demand than developers, not only in Sofia but all around the world!

Lesson 8

Fellow QA guys, please do learn to program. Dear developers, please try thinking more like a tester the next time you write code (me included).

Hiring a barista and furnishing your company stand with the best coffee machine you can afford while having an ugly hand written sign saying "MAIN CONFERENCE COFFEE ->" is a marketing stunt that I really love. I'm not sure how well that worked for their hiring but it got them visibility. I'm definitely stealing this one!

Lesson 9

Conference coffee sucks. Provide a better one and developers will queue at your stand. More broadly: research your target audience and their needs and provide a product that solves their problem.

What we gave back

indeed Monica it is. Here's the secret sauce

My personal contribution back was telling Yammer and Deliveroo about mutation testing and pointing them to the right tools and videos on the subject. I wish them good luck and happy testing.

NOTE: I will be speaking about mutation testing at several different events in Bulgaria in the next 2 months so make sure to find me if you want to chat.


What Ivan Learned from Organizing Internships

This is a summary of Ivan Nemytchenko's talk at EuRuKo yesterday (slides here). I'm writing this because it was the best talk I saw at the conference, both in terms of content and visual presentation, and because it is closely related to my work with HackBulgaria.

The short story is that at some point Ivan was mentoring several junior developers and saw the need to scale this effort so he did a call for interns and got back 60 replies.

What an Intern Gets

  1. Projects in their portfolio
  2. Working experience, including team work
  3. Developing an entire product from idea to production

Ivan wanted to find suitable interns who have basic Ruby on Rails knowledge and who could invest a minimum of 20 hours per week of their time, so he devised an aptitude test in 3 parts.

Part 1 was developing the basic functionality of the product. Part 2 was adding different user types which require different validation logic, etc. Part 3 was adding "purchasing" logic via external APIs. In Part 3 there was intentionally no code review!

The final result was shit! That was the purpose of the test. The reasoning is that there is no right or wrong way to solve the problems he presented to the interns. Instead he wanted to make them think, decide on a solution and then feel the pain of their decision. Ivan argues that what made us senior developers are these pains we have experienced at some point in our careers, those fuck-ups we did in some old project. All of them made us better at our job because we could learn from the mistakes we've made and, more importantly, understand the consequences of our decisions.

The common mistakes Ivan saw were:

  • Ignoring levels of abstraction;
  • Using too many gems without knowing or understanding their limitations;
  • Gems were treated as the only way to solve a problem. More importantly, changing that approach was out of the question;
  • Interns didn't know about service objects; well, even some experienced developers seem not to know about them;
  • Business logic was all over the place;
  • Bad naming all around

The next thing Ivan did was a group hangout code review followed by a short lecture about design patterns, a refactoring session and finally cross code review. At the end the product was delivered as expected.

Following these initial efforts Ivan continued (with even more interns, or the next group of them I think) by asking interns to develop internship automation, that is a means for the system to distribute tasks based on git commits, tags, etc. so it can scale. They added an admin dashboard and started working on an open source alternative to NewRelic (if I got that correctly). He was also able to enlist 2 more mentors to help him.

Problems Ivan found:

  • Not enough mentors and external projects to work on for all of the interns;
  • Treating a project as not real (e.g. not a real world product) is a mistake;
  • A training project has the same management issues that a real product will have and they need to be resolved in pretty much the same way;
  • There was collective irresponsibility in the group of interns. They didn't do what they said they would do;
  • There were communication issues between the interns, and the lack of enough mentors was an obvious problem;
  • There was also a lack of motivation.

I'd say these are the typical problems one also sees in almost any team. It doesn't matter if these are teams of students or teams of developers inside some company.

What a Junior Needs

  • A real project to work on;
  • A business context, a reason why something should be done and why it needs to be done in a particular way;
  • Some visible achievement for their portfolio;
  • Team work experience;
  • Whole cycle development experience.

Ivan thinks that the aptitude test worked great because his interns were able to find good jobs afterwards, but he will change a few things. There will be even more tests and he will reject unfit/bad interns. He will also do a call for mentors, not only for interns. And he wants to turn mentors' experience into tests as well.

I particularly like the "business context" item. IMO even seasoned developers need to have this if they are expected to create a great product for their company. We're not just coders but sometimes companies forget that!

I am also wondering how I can apply a similar aptitude test in my work (both mentoring at HackBulgaria and otherwise).

How about Senior Developers

  • They all have routine tasks;
  • and research tasks;
  • Nice to have features and
  • Low priority features;
  • Side project ideas
  • Missing features in their favorite open source projects

Senior developers' tasks and desires will have to align with what a junior needs in order for the mentorship to work. As senior devs we often make the mistake of expecting everyone else to think the same way we do and act as fast as we do. Ideally we want multiple clones of ourselves to work with! I myself have been guilty of that and am trying to change.

In the context of a for-profit company the above findings should be taken into deep consideration if you are about to have interns.

After the talk I was lucky enough to talk to Ivan and tell him more about the training sessions at HackBulgaria. I also proposed to him the sponsorship model, which he hadn't considered. He then made a counter offer: ask interns for a high payment upfront and let them recoup it based on their progress towards the end.

I am really happy to have heard this presentation and to have been able to talk to Ivan in person. I also have my notes about my "QA and Automation 101" training at HackBulgaria and I now have a better idea how to go about organizing and summarizing them (I will try to publish that soon).

Last but not least, Ivan works at GitLab and promised to look at an issue I personally have so here it is GitLab #7953 :).

Related reading


Questers Beer'n'Code Day 2.0

Last weekend I visited Questers Beer'n'Code Day, an open air mini-conference held on the terrace of their office. The only organizational drawback was the summer sun, which made it impossible to see anything on the screen. Most speakers were OK with that, although they wanted to show some code examples.

I have recorded all talks and they are available in my TECH TALKS YouTube play list. You can also hear me asking some questions from behind the camera. All of the talks are in Bulgarian though, so sorry for my English speaking readers.

The afternoon started with Lidiya Georgieva and her talk about clean code and code smells. I find the topic particularly interesting but she didn't go into much detail. She said she had used SonarQube but couldn't recommend any other tools, except for the standard lint-style ones. I have been using Landscape.io for all Python-based code I've been working on recently and I think it is great.

Another talk I found interesting was by my fellow QA Petar Sabev on reporting bugs. It was more of an entry level talk, but still very informative for both less experienced QAs and other technical folks so I definitely recommend it.

The last talk, and the most interesting one, was Bogoi Bogdanov with Scaling Agile. Despite the name he covered some basics about Agile and what it actually is. Afterwards we stayed and talked for a good 2 hours more. I would definitely like to hear more from him in the future.

A big thanks to Questers for hosting this event and allowing me to record it. Happy watching.


Python 2 vs. Python 3 List Sort Causes Bugs

Can sorting a list of values crash your software? Apparently it can, and this is another example of my Hello World Bugs. Python 3 has simplified the rules for ordering comparisons, which changes the behavior of sorting lists when their contents are dictionaries. For example:

Python 2.7.5 (default, Oct 11 2015, 17:47:16) 
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux2
>>> 
>>> [{'a':1}, {'b':2}] < [{'a':1}, {'b':2, 'c':3}]
True
>>>
Python 3.5.1 (default, Apr 27 2016, 04:21:56) 
[GCC 4.8.3 20140911 (Red Hat 4.8.3-9)] on linux
>>> [{'a':1}, {'b':2}] < [{'a':1}, {'b':2, 'c':3}]
Traceback (most recent call last):
  File "<stdin>", line 1, in <module>
TypeError: unorderable types: dict() < dict()
>>>

The problem is that the second elements in both lists have different keys and Python doesn't know how to compare them. In earlier Python versions this has been special cased as described here by Ned Batchelder (the author of Python's coverage tool) but in Python 3 dictionaries have no natural sort order.

In the case of django-chartit (of which I'm now the official maintainer) this bug triggers when you want to plot data from multiple sources (models) on the same chart. In this case the fields coming from each data series are different and the above error is triggered.

I have worked around this in commit 9d9033e by simply disabling an iterator sort, but this is sub-optimal and I'm not quite certain what the side effects might be. I suspect you may end up with a chart where the order of values on the X axis isn't the same for the different models, e.g. one graph plotting the data in ascending order and the other one in descending.

The trouble also comes from the fact that we're sorting an iterator (a list of fields) by telling Python to use a list of dicts to determine the sort order. In this arrangement there is no way to tell Python how we want to compare our dicts. The only solution I can think of is creating a custom class for this data structure and implementing custom comparison methods (__cmp__() on Python 2, rich comparison methods such as __lt__() on Python 3)!
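For illustration, here is a minimal sketch of that idea (my own example, not django-chartit code): wrap each dict in a small class that defines its own ordering, so sorted() works the same on Python 2 and Python 3:

class ComparableDict(object):
    """Give dicts a deterministic sort order based on their (key, value) pairs."""

    def __init__(self, data):
        self.data = data

    def _key(self):
        # assumes the keys and values themselves are comparable
        return sorted(self.data.items())

    def __lt__(self, other):        # used by sorted() on Python 3
        return self._key() < other._key()

    def __eq__(self, other):
        return self._key() == other._key()


values = [{'b': 2, 'c': 3}, {'a': 1}]
print(sorted(values, key=ComparableDict))   # [{'a': 1}, {'b': 2, 'c': 3}]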


PhantomJS 2.1.1 in Ubuntu different from upstream

For some time now I've been hitting PhantomJS #12506 with the latest 2.1.1 version. The problem is supposedly fixed in 2.1.0 but this is not always the case. If you use a .deb package from the latest Ubuntu then the problem still exists, see Ubuntu #1605628.

It turns out the root cause of this, and probably other problems, is the way PhantomJS packages are built. Ubuntu builds the package against their stock Qt5WebKit libraries which leads to

$ ldd usr/lib/phantomjs/phantomjs | grep -i qt
    libQt5WebKitWidgets.so.5 => /lib64/libQt5WebKitWidgets.so.5 (0x00007f5173ebf000)
    libQt5PrintSupport.so.5 => /lib64/libQt5PrintSupport.so.5 (0x00007f5173e4d000)
    libQt5Widgets.so.5 => /lib64/libQt5Widgets.so.5 (0x00007f51737b6000)
    libQt5WebKit.so.5 => /lib64/libQt5WebKit.so.5 (0x00007f5171342000)
    libQt5Gui.so.5 => /lib64/libQt5Gui.so.5 (0x00007f5170df8000)
    libQt5Network.so.5 => /lib64/libQt5Network.so.5 (0x00007f5170c9a000)
    libQt5Core.so.5 => /lib64/libQt5Core.so.5 (0x00007f517080d000)
    libQt5Sensors.so.5 => /lib64/libQt5Sensors.so.5 (0x00007f516b218000)
    libQt5Positioning.so.5 => /lib64/libQt5Positioning.so.5 (0x00007f516b1d7000)
    libQt5OpenGL.so.5 => /lib64/libQt5OpenGL.so.5 (0x00007f516b17c000)
    libQt5Sql.so.5 => /lib64/libQt5Sql.so.5 (0x00007f516b136000)
    libQt5Quick.so.5 => /lib64/libQt5Quick.so.5 (0x00007f5169dad000)
    libQt5Qml.so.5 => /lib64/libQt5Qml.so.5 (0x00007f5169999000)
    libQt5WebChannel.so.5 => /lib64/libQt5WebChannel.so.5 (0x00007f5169978000)

While building from the upstream sources shows no dynamic Qt dependencies at all:

$ ldd /tmp/bin/phantomjs | grep -i qt

If you take a closer look at PhantomJS's sources you will notice there are 3 git submodules in their repository - 3rdparty, qtbase and qtwebkit. Then in their build.py you can clearly see that this local fork of QtWebKit is built first, then the phantomjs binary is built against it.

The problem is that these custom forks include additional patches to make WebKit suitable for Phantom's needs. And these patches are not available in the stock WebKit library that Ubuntu uses.

Yes, that's correct. We need additional functionality that vanilla QtWebKit doesn't have. That's why we use custom version.

Vitaly Slobodin, PhantomJS

At the moment of this writing Vitaly's qtwebkit fork is 28 commits ahead and 39 commits behind qt:dev. I'm surprised Ubuntu's PhantomJS even works.

The solution IMO is to bundle the additional sources into the src.deb package and use the same building procedure as upstream.


On Python Infinite Loops

How do you write an endless loop without using True, False, numeric constants or comparison operators in Python?

I've been working on the mutation testing tool Cosmic Ray and discovered that it was missing a boolean replacement operator, that is an operator which will switch True to False and vice versa, so I wrote one. I've also added some tests to Cosmic Ray's test suite and then I hit the infinite loop problem. CR's test suite contains the following code inside a module called adam.py:

while True:
    break

The test suite executes mutations on adam.py and then runs some tests which it expects to fail. During execution one of the mutations is replace break with continue which makes the above loop infinite. The test suite times out after a while and kills the mutation. Everything fails as expected and we're good.
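As an aside, the core idea of a boolean replacement operator can be sketched with the standard ast module (a simplified stand-in of mine, not Cosmic Ray's actual operator API):

import ast

class BooleanReplacer(ast.NodeTransformer):
    """Swap True with False (and vice versa) in a parsed syntax tree."""

    def visit_Constant(self, node):    # True/False are Constant nodes on recent Pythons
        if isinstance(node.value, bool):
            return ast.copy_location(ast.Constant(value=not node.value), node)
        return node

tree = ast.parse("while True:\n    break\n")
mutated = BooleanReplacer().visit(tree)
print(ast.unparse(mutated))            # Python 3.9+: prints the loop with "while False:"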

Adding my boolean replacement operator broke this function. All of the other mutations work as expected but then the loop becomes

while False:
    break

When we test this particular mutation there is no infinite loop so Cosmic Ray's test suite doesn't time out like it should and an error is reported.

job ID 25:Outcome.SURVIVED:adam
command: cosmic-ray worker adam boolean_replacer 2 unittest -- tests
--- mutation diff ---
--- a/home/travis/build/MrSenko/cosmic-ray/test_project/adam.py
+++ b/home/travis/build/MrSenko/cosmic-ray/test_project/adam.py
@@ -32,6 +32,6 @@
     return x
 
 def trigger_infinite_loop():
-    while True:
+    while False:
         break

So the question becomes: how do we write the loop condition in such a way that nothing will mutate it, yet it still remains true, so that when break becomes continue this piece of code turns into an infinite loop? Using the True or False constants is obviously a no-go. The same goes for numeric constants, e.g. 1, or comparison operators like >, <, is, not, etc. - all of them will be mutated and will break the loop condition.

So I took a look at the docs for truth value testing and discovered my solution:

while object():
    break

I'm creating an object instance here which will not be mutated by any of the existing mutation operators.
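Any plain object instance is truthy, so the condition behaves exactly like while True yet contains nothing the current operators know how to mutate. A quick check in the interpreter (my own example):

>>> bool(object())   # object() defines neither __bool__ nor __len__, so it is true
True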

Thanks for reading and happy testing!


Bug in TuxCon Website

TuxCon bug

Here comes July 9th 2016 and the start of TuxCon ... with a bug on their website! The image above was taken during the first talk of the conference. Obviously the countdown timer is completely off.

In init.js:100 there is this piece of code

var finalDate = '2016/07/09';

$('div#counter').countdown(finalDate)
.on('update.countdown', function(event) {
    $(this).html(event.strftime('<span>%D <em>days</em></span>' +
                                '<span>%H <em>hours</em></span>' +
                                '<span>%M <em>minutes</em></span>' +
                                '<span>%S <em>seconds</em></span>'));
});

It counts backwards and updates the HTML until finalDate is reached. Then the HTML is no longer updated and the default values are shown, which in this case are non-zero. A simple patch fixes the problem.

Initialize your variables properly and happy testing!


Testing the 8-bit computer Puldin

Puldin creators

Last weekend I visited TuxCon in Plovdiv and was very happy to meet and talk to some of the creators of the Puldin computer! On the picture above are (left to right) Dimitar Georgiev - wrote the text editor, Ivo Nenov - BIOS, DOS and core OS developer, Nedyalko Todorov - director of the vendor company and Orlin Shopov - BIOS, DOS, compiler and core OS developer.

Puldin is a 100% pure Bulgarian development, while the "Pravetz" brand was a copy of the Apple ][ (Pravetz 8A, 8C, 8M), Oric (Pravetz 8D) and IBM-PC (Pravetz 16). The Puldin computers were built from scratch, both hardware and software, and were produced in Plovdiv in the late 80s and early 90s. 50,000 pieces were made, at least 35,000 of them exported to Russia and paid for. A typical configuration in a Russian classroom consisted of several Puldin computers and a single Pravetz 16. According to Russian sources the last usage of these computers was in 2003, serving as Linux terminals and being maintained without any support from the vendor (because it had ceased to exist).

Puldin 601

One of the main objectives of Puldin was full compatibility with the IBM-PC. At the time IBM had been releasing extensive documentation about how their software and hardware worked, which Puldin's creators used as their software specs. Despite the IBM-PC using a faster CPU, the Puldin 601 had comparable performance due to aggressive software and compiler optimizations.

Testing-wise the guys used to compare Puldin's functionality with that of the IBM-PC. It was a hard requirement to have full compatibility on the file storage layer, which means floppy disks written on Puldin had to be readable on the IBM-PC and vice versa. The same goes for programs compiled on Puldin - they had to execute on the IBM-PC.

Everything of course had been tested manually and, on top of that, all the software had to be burned to ROM before you could do anything with it. As you can imagine, the testing process was quite slow and painful compared to today's standards. I asked the guys if they had happened to find a bug in the IBM-PC which wasn't present in their code, but they couldn't remember one.

What was interesting for me on the hardware side was the fact that you could plug the computer directly into a cheap TV set and that it was one of the first computers which could operate on 12V DC, powered directly from a car battery.

Pravetz 8

There was also a fully functional Pravetz 8 with an additional VGA port to connect it to the LCD monitor as well as a SD card reader wired to function as a floppy disk reader (the small black dot behind the joystick).

For those who missed it (and understand Bulgarian) I have a video recording on YouTube. For more info about the history and the hardware please check out the Olimex post on Puldin (in English). For more info on Puldin and Pravetz please visit pyldin.info (in Russian) and pravetz8.com (in Bulgarian), using Google Translate if need be.


Testing Data Structures in Pykickstart

When designing automated test cases we often think either about increasing coverage or in terms of testing more use-cases, aka behavior scenarios. Both are valid approaches to improve testing and both of them tend to focus on execution control flow (or business logic). However program behavior is sometimes controlled via the contents of its data structures and this is something which is rarely tested.

In this comment Brian C. Lane and Vratislav Podzimek from Red Hat are talking about a data structure which maps Fedora versions to particular implementations of kickstart commands. For example

class RHEL7Handler(BaseHandler):
    version = RHEL7

    commandMap = {
        "auth": commands.authconfig.FC3_Authconfig,
        "authconfig": commands.authconfig.FC3_Authconfig,
        "autopart": commands.autopart.F20_AutoPart,
        "autostep": commands.autostep.FC3_AutoStep,
        "bootloader": commands.bootloader.RHEL7_Bootloader,
    }

In their particular case the Fedora 21 logvol implementation introduced the --profile parameter, but in Fedora 22 and Fedora 23 the logvol command still mapped to the Fedora 20 implementation and the --profile parameter wasn't available. This is an unexpected change in program behavior, although the logvol.py and handlers/f22.py files have 99% and 100% code coverage.

This morning I did some coding and created an automated test for this problem. The test iterates over all command maps. For each command in the map (that is, each data structure member) we load the module which provides all possible implementations of that command. In the loaded module we search for implementations which have newer versions than what is in the map, but are not newer than the current Fedora version under test. With a little bit of pruning the current list of offenses is:

ERROR: In `handlers/devel.py` the "fcoe" command maps to "F13_Fcoe" while in
`pykickstart.commands.fcoe` there is newer implementation: "RHEL7_Fcoe".

ERROR: In `handlers/devel.py` "FcoeData" maps to "F13_FcoeData" while in
`pykickstart.commands.fcoe` there is newer implementation: "RHEL7_FcoeData".

ERROR: In `handlers/devel.py` the "user" command maps to "F19_User" while in
`pykickstart.commands.user` there is newer implementation: "F24_User".

ERROR: In `handlers/f24.py` the "user" command maps to "F19_User" while in
`pykickstart.commands.user` there is newer implementation: "F24_User".

ERROR: In `handlers/f22.py` the "logvol" command maps to "F20_LogVol" while in
`pykickstart.commands.logvol` there is newer implementation: "F21_LogVol".

ERROR: In `handlers/f22.py` "LogVolData" maps to "F20_LogVolData" while in
`pykickstart.commands.logvol` there is newer implementation: "F21_LogVolData".

ERROR: In `handlers/f18.py` the "network" command maps to "F16_Network" while in
`pykickstart.commands.network` there is newer implementation: "F18_Network".

The first two are possibly false negatives or related to the naming conventions used in this module. However the rest appear to be legitimate problems. The user command introduced the --groups parameter in Fedora 24 (devel is currently Fedora 25) but the parser will fail to recognize this parameter. The logvol problem is recognized as well since it was never patched. And the Fedora 18 network command implements a new property called hostname which has probably never been available for use.
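The gist of the test can be sketched like this (a simplified stand-in of mine, not the actual pykickstart test; the version list and class names are made up for illustration):

# releases in chronological order
VERSIONS = ["FC3", "F13", "F19", "F20", "F21", "F24"]

def newest_allowed(implementations, mapped, max_version):
    """Return the newest implementation that isn't newer than max_version."""
    best, best_idx = mapped, VERSIONS.index(mapped.split("_")[0])
    max_idx = VERSIONS.index(max_version)
    for name in implementations:
        idx = VERSIONS.index(name.split("_")[0])
        if best_idx < idx <= max_idx:
            best, best_idx = name, idx
    return best

# what the handler maps vs. what the command module actually provides
command_map = {"user": "F19_User"}
available = {"user": ["FC3_User", "F19_User", "F24_User"]}

for command, mapped in command_map.items():
    newest = newest_allowed(available[command], mapped, "F24")
    if newest != mapped:
        print('ERROR: "%s" maps to "%s" but "%s" exists' % (command, mapped, newest))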

You can follow my current work in PR #91 and happy testing your data structures.


Don't Upgrade Galaxy S5 to Android 6.0

Samsung is shipping out buggy software like a boss, no doubt about it. I've written a bit about their bugs previously. However I didn't expect them to release Android 6.0.1 and render my Galaxy S5 completely useless with respect to the feature I use the most.

Lockscreen

Tell me the weather for Brussels

So on Monday I let Android upgrade to 6.0.1, only to be completely surprised that the lockscreen shows the weather report for Brussels, while I'm based in Sofia. I checked AccuWeather (I did go to Brussels earlier this year) but it displayed only Sofia and Thessaloniki. To get rid of this widget go to Settings -> Lockscreen -> Additional information and turn it off!

I think this weather report comes from GPS/location-based data, which I have turned off by default but did use a while back. After turning the widget off and back on it didn't appear on the lockscreen. I suspect they fall back to showing the last known good value when data is missing instead of handling the error properly.

Apps are gone

Some of my installed apps are missing now. So far I've noticed that the Gallery and S Health icons have disappeared from my homescreen. I think S Health came from Samsung's app store but still they shouldn't have removed it silently. Now I wonder what happened to my data.

I don't see why Gallery was removed though. The only way to view pictures now is to use the camera app's preview functionality, which is kind of gross.

Grayscale in power saving mode is gone

The killer feature on these higher-end Galaxy devices is the Power saving mode and Ultra power saving mode. I use them a lot and by default have my phone in Power saving mode with grayscale colors enabled. It is easier on the eyes and also saves your battery.

NOTE: grayscale colors don't affect some displays but these devices use AMOLED screens which need different amounts of power to display different colors. More black means less power required!

After the upgrade grayscale is no more. There's not even an on/off switch. I've managed to find a workaround though. First you need to enable developer mode by tapping 7 times on About device -> Build number. Then go to Settings -> Developer options, look for the Hardware Accelerated Rendering section and select Simulate Color Space -> Monochromacy! This is a bit of an ugly hack and doesn't have the convenience of toggling colors on/off by tapping the quick Power saving mode button at the top of the screen!

It looks like Samsung didn't think this upgrade through well enough, or didn't test it well enough. In my line of work (installation and upgrade testing) I've rarely seen such a big blunder. Thanks for reading and happy testing!


How To Hire Software Testers, Pt. 3

In previous posts (links below) I have described my process of interviewing QA candidates. Today I'm quoting an excerpt from the book Mission: My IT Career (Bulgarian only) by Ivaylo Hristov, one of Komfo's co-founders.

Fedora pen

He writes

Probably the most important personal trait of a QA engineer is to
be able to think outside given boundaries and prejudices
(about the software, that is). When necessary, to be non-conventional and
apply different approaches to the problems being solved. This will help
them find defects which nobody else will notice.

Most often errors/mistakes in software development are made due to
wrong expectations or wrong assumptions. Very often this happens because
developers hope their software will be used in one particular way
(as it was designed to be) or that a particular set of data will be returned.
Thus the skill to think outside the box is the most important skill
we (as employers) are looking for in a QA candidate. At job interviews
you can expect to be given tasks and questions which examine those skills.

How would you test a pen?

This is Ivaylo's favorite question for QA candidates. He's looking for attention to detail and knowing when to stop testing. Some of the possible answers related to core functionality are:

  • Does the pen write in the correct color
  • Does the color fade over time
  • Does the pen operate normally at various temperatures? What temperature intervals would you choose for testing
  • Does the pen operate normally at various atmospheric pressures
  • When writing, does the pen leave excessive ink
  • When writing, do you get a continuous line or not
  • What pressure does the user need to apply in order to write a continuous line
  • What surfaces can the pen write on? What surfaces would you test
  • Are you able to write on a piece of paper if there is something soft underneath
  • What is the maximum inclination angle at which the pen is able to write without problems
  • Does the ink dry fast
  • If we spill different liquids onto a sheet of paper, on which we had written something, does the ink stay intact or smear
  • Can you use pencil rubber to erase the ink? What else would you test
  • How long can you write before we run out of ink
  • How thick is the ink line

Then Ivaylo gives a few more non-obvious answers

  • Verify that all labels on the pen/ink cartridge are correctly spelled and how durable they are (try to erase them)
  • Strength test - what is the maximum height you can drop the pen from without breaking it
  • Verify that dimensions are correct
  • Test if the pen keeps writing after not being used for some time (how long)
  • Testing individual pen components under different temperature and atmospheric conditions
  • Verify that materials used to make the pen are safe, e.g. when you put the pen in your mouth

When should you stop? According to the book there can be between 50 and 100 test cases for a single pen, maybe more. It's not a good sign if you stop after the first 3!

If you want to know what skills are revealed via these questions please read my other posts on the topic:

Thanks for reading and happy testing!


Capybara's find().click doesn't always fire onClick

Recently I observed a strange behavior in one of the test suites I'm working with - a test which submits a web form appeared to fail at a rate between 10% and 30%. This immediately made me think there is some kind of race condition, but it turned out that Capybara's find().click method doesn't always fire the onClick event in the browser!

The test suite uses Capybara, Poltergeist and PhantomJS. The element we click on is an image, coupled to a hidden check-box underneath. When the image is clicked onClick is fired and the check-box is updated accordingly. In the failing cases the underlying check-box wasn't updated! Searching the web reveals a similar problem described by Alex Okolish, so we tried his solution:

div.find('.replacement', visible: true).trigger(:click)

How to Test

The failure being somewhat flaky, I opted for running the test multiple times and seeing what happens when it fails. Initially I executed the test in batches of 10 and 20 repetitions to get a feeling of how often it fails before proceeding with debugging. Debugging was done by logging variables and state on the console and repeating multiple times. Once a possible solution was proposed we ran the tests in batches of 100 repetitions and counted how often they failed.
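The repetition itself is easy to script; a quick sketch of the idea (the command line and spec path are illustrative, adjust them to your own suite):

import subprocess

RUNS = 100
failures = 0

for _ in range(RUNS):
    # illustrative command - point it at the flaky spec in your own suite
    result = subprocess.run(["bundle", "exec", "rspec", "spec/features/submit_form_spec.rb"])
    if result.returncode != 0:
        failures += 1

print("%d out of %d runs failed" % (failures, RUNS))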

In the end, once Alex's solution was discovered, we repeated the testing around 1000 times just to make sure it works reliably. So far this has been working without issues!

I've spent around a week working on this together with a co-worker and we didn't really want to spend more time trying to figure out what was going wrong with our tools. Once we saw that trigger does the job we didn't continue debugging Capybara or PhantomJS.


DEVit Conf 2016

It's been another busy week after DEVit conf took place in Thessaloniki. Here are my impressions.

DEVit 2016

Pre-conference

TechMinistry is Thessaloniki's hacker space, hosted at a central location near the major shopping streets. I attended an Open Source Wednesday meeting. From the event description I thought there was going to be a discussion about getting involved with Firefox. However that was not the case. Once people started coming in they formed organic groups and started discussing various topics on their own.

I was also shown their 3D printer, which IMO is the most precise 3D printer I've seen so far. Imagine what it would be like to click Print, sometime in the future, and have your online orders appear on your desk overnight. That would be quite cool!

I've met with Christos Bacharakis, a Mozilla representative for Greece, who gave me some goodies for my students at HackBulgaria!

On Thursday I spent the day merging pull requests for MrSenko/pelican-octopress-theme and attended the DEVit Speakers dinner at Massalia. Food and drinks were very good and I even found a new recipe for mushrooms with ouzo, of which I think I had a bit too many :).

I was also told that "a full stack developer is a developer who can introduce a bug to every layer of the software stack". I can't agree more!

DEVit

The conference day started with a huge delay due to long queues for registration. The first talk I attended, and the best one IMO, was Need It Robust? Make It Fragile! by Yegor Bugayenko (watch the video). There he talked about two different approaches to writing software: fail-safe vs. fail-fast.

He argues that when software is designed to fail fast, bugs are discovered earlier in the development cycle/software lifetime and thus are easier to fix, making the whole system more robust and more stable. On the other hand, when software is designed to hide failures and tries to recover auto-magically, the same problems remain hidden for longer and when they are finally discovered they are harder to fix. This is mostly due to the fact that the original error condition is hidden and manifests in a different way, which makes it harder to debug.

Yegor gave several examples, all of them valid code, which he nevertheless considers bad practice. For example imagine we have a function that accepts a filename as a parameter:

import os

def read_file_fail_safe(fname):
    if not os.path.exists(fname):
        return -1

    # read the file, do something else
    ...
    return bytes_read


def read_file_fail_fast(fname):
    if not os.path.exists(fname):
        raise Exception('File does not exist')

    # read the file, do something else
    ...
    return bytes_read

In the first example read_file_fail_safe returns -1 on error. The trouble is that whoever calls this method may not check for errors, thus corrupting the flow of the program further down the line. You may also want to collect metrics and update your database with the number of bytes processed - the -1 will totally skew your metrics. C programmers out there will quickly remember at least one case when they didn't check a return code for errors!

The second example, read_file_fail_fast, will raise an exception the moment it encounters a problem. It's not its fault that the file doesn't exist and there's nothing it can do about it, nor is it its job to do anything about it. The exception will surface back to the caller and they will be notified about the problem, so they can take appropriate action to resolve it.
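To see why this matters on the caller's side, here is a small self-contained sketch of my own (using os.path.getsize as a stand-in for the actual file reading): the fail-safe variant silently skews the byte counter, while the fail-fast variant stops you right away:

import os

def read_file_fail_safe(fname):
    # returns a bogus error value instead of raising
    return os.path.getsize(fname) if os.path.exists(fname) else -1

def read_file_fail_fast(fname):
    # surfaces the problem immediately
    if not os.path.exists(fname):
        raise Exception('File does not exist: %s' % fname)
    return os.path.getsize(fname)

files = ["a.txt", "missing.txt"]

# fail safe: the -1 quietly skews the metric and nobody notices
total_bytes = sum(read_file_fail_safe(f) for f in files)

# fail fast: the very first problem stops us and can be handled properly
try:
    total_bytes = sum(read_file_fail_fast(f) for f in files)
except Exception as err:
    print("stop and fix the real problem: %s" % err)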

Yegor was also unhappy that many books teach the fail-safe approach and that even IDEs (for Java) generate fail-safe boiler-plate code (need to check this)! Indeed it was me who asked the first question, "Are there any tools to detect fail-safe code patterns?", and it turns out there aren't (for the majority of cases, that is). If you happen to know such a tool please post a link in the comments below.

I was a bit disappointed by the rest of the talks. They were all high-level overviews IMO and didn't go deeply technical. Last year was better. I also wanted to attend the GitHub Patchwork workshop, but looking at the agenda it seemed to be aimed at users who are just starting with git and GitHub (which I'm not).

The closing session of the day was "Real time front-end alchemy, or: capturing, playing, altering and encoding video and audio streams, without servers or plugins!" by Soledad Penades from Mozilla. There she gave a demo about the latest and greatest in terms of audio and video capturing, recording and mixing natively in the browser. This is definitely very cool for apps in the audio/video space but I can also imagine an application for us software testers.

Depending on computational and memory requirements you should be able to record everything the user does in their browser (while on your website) and send it back home when they want to report an error or contact support. Definitely better than screenshots and having to go back and forth until the exact steps to reproduce are established.


Changing Rails consider_all_requests_local in RSpec fails

Like many others, I've been trying to change Rails.application.config.consider_all_requests_local and Rails.application.config.action_dispatch.show_exceptions inside my RSpec tests in order to test custom error pages in a Rails app. However this doesn't work. My code looked like this:

feature 'Exceptions' do
  before do
    Rails.application.config.action_dispatch.show_exceptions = true
    Rails.application.config.consider_all_requests_local = false
  end

This works only if I execute exceptions_spec.rb alone. However when something else executes before it, it fails. The config values are correctly updated but that doesn't seem to have an effect.

The answer and solution comes from Henrik N.

action_dispatch.show_exceptions gets copied and cached in Rails.application.env_config, so even if you change Rails.application.config.action_dispatch.show_exceptions in this before block the value isn't what you expect when it's used in ActionDispatch::DebugExceptions.

In fact DebugExceptions uses env['action_dispatch.show_exceptions']. The correct code should look like this

before do
  method = Rails.application.method(:env_config)
  expect(Rails.application).to receive(:env_config).with(no_args) do
    method.call.merge(
      'action_dispatch.show_exceptions' => true,
      'action_dispatch.show_detailed_exceptions' => false,
      'consider_all_requests_local' => false
    )
  end
end

This allows the test to work regardless of the order of execution of the spec files. I don't know why, but I also had to leave show_detailed_exceptions in, otherwise I wasn't getting the desired results.


Mismatch in Pyparted Interfaces

Last week my co-worker Marek Hruscak, from Red Hat, found an interesting case of a mismatch between the two interfaces provided by pyparted. In this article I'm going to give an example, using simplified code, and explain what is happening. From pyparted's documentation we learn the following:

pyparted is a set of native Python bindings for libparted. libparted is the library portion of the GNU parted project. With pyparted, you can write applications that interact with disk partition tables and filesystems.

The Python bindings are implemented in two layers. Since libparted itself is written in C without any real implementation of objects, a simple 1:1 mapping of externally accessible libparted functions was written. This mapping is provided in the _ped Python module. You can use that module if you want to, but it's really just meant for the larger parted module.

_ped       libparted Python bindings, direct 1:1: function mapping
parted     Native Python code building on _ped, complete with classes,
           exceptions, and advanced functionality.

The two interfaces are the _ped and parted modules. As a user I expect them to behave exactly the same but they don't. For example some partition properties are read-only in libparted and _ped but not in parted. This is the mismatch I'm talking about.

Consider the following tests (also available on GitHub)

diff --git a/tests/baseclass.py b/tests/baseclass.py
index 4f48b87..30ffc11 100644
--- a/tests/baseclass.py
+++ b/tests/baseclass.py
@@ -168,6 +168,12 @@ class RequiresPartition(RequiresDisk):
         self._part = _ped.Partition(disk=self._disk, type=_ped.PARTITION_NORMAL,
                                     start=0, end=100, fs_type=_ped.file_system_type_get("ext2"))
 
+        geom = parted.Geometry(self.device, start=100, length=100)
+        fs = parted.FileSystem(type='ext2', geometry=geom)
+        self.part = parted.Partition(disk=self.disk, type=parted.PARTITION_NORMAL,
+                                    geometry=geom, fs=fs)
+
+
 # Base class for any test case that requires a hash table of all
 # _ped.DiskType objects available
 class RequiresDiskTypes(unittest.TestCase):
diff --git a/tests/test__ped_partition.py b/tests/test__ped_partition.py
index 7ef049a..26449b4 100755
--- a/tests/test__ped_partition.py
+++ b/tests/test__ped_partition.py
@@ -62,8 +62,10 @@ class PartitionGetSetTestCase(RequiresPartition):
         self.assertRaises(exn, setattr, self._part, "num", 1)
         self.assertRaises(exn, setattr, self._part, "fs_type",
             _ped.file_system_type_get("fat32"))
-        self.assertRaises(exn, setattr, self._part, "geom",
-                                     _ped.Geometry(self._device, 10, 20))
+        with self.assertRaises((AttributeError, TypeError)):
+#            setattr(self._part, "geom", _ped.Geometry(self._device, 10, 20))
+            self._part.geom = _ped.Geometry(self._device, 10, 20)
+
         self.assertRaises(exn, setattr, self._part, "disk", self._disk)
 
         # Check that values have the right type.
diff --git a/tests/test_parted_partition.py b/tests/test_parted_partition.py
index 0a406a0..8d8d0fd 100755
--- a/tests/test_parted_partition.py
+++ b/tests/test_parted_partition.py
@@ -23,7 +23,7 @@
 import parted
 import unittest
 
-from tests.baseclass import RequiresDisk
+from tests.baseclass import RequiresDisk, RequiresPartition
 
 # One class per method, multiple tests per class.  For these simple methods,
 # that seems like good organization.  More complicated methods may require
@@ -34,11 +34,11 @@ class PartitionNewTestCase(unittest.TestCase):
         # TODO
         self.fail("Unimplemented test case.")
 
-@unittest.skip("Unimplemented test case.")
-class PartitionGetSetTestCase(unittest.TestCase):
+class PartitionGetSetTestCase(RequiresPartition):
     def runTest(self):
-        # TODO
-        self.fail("Unimplemented test case.")
+        with self.assertRaises((AttributeError, TypeError)):
+            #setattr(self.part, "geometry", parted.Geometry(self.device, start=10, length=20))
+            self.part.geometry = parted.Geometry(self.device, start=10, length=20)
 
 @unittest.skip("Unimplemented test case.")
 class PartitionGetFlagTestCase(unittest.TestCase):

The test in test__ped_partition.py works without problems; I've modified it for visual reference only. This was also the inspiration behind the test in test_parted_partition.py. However the second test fails with:

======================================================================
FAIL: runTest (tests.test_parted_partition.PartitionGetSetTestCase)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "/tmp/pyparted/tests/test_parted_partition.py", line 41, in runTest
    self.part.geometry = parted.Geometry(self.device, start=10, length=20)
AssertionError: (<type 'exceptions.AttributeError'>, <type 'exceptions.TypeError'>) not raised

----------------------------------------------------------------------

Now it's clear that something isn't quite the same between the two interfaces. If we look at src/parted/partition.py we see the following snippet

fileSystem = property(lambda s: s._fileSystem, lambda s, v: setattr(s, "_fileSystem", v))
geometry = property(lambda s: s._geometry, lambda s, v: setattr(s, "_geometry", v))
system = property(lambda s: s.__writeOnly("system"), lambda s, v: s.__partition.set_system(v))
type = property(lambda s: s.__partition.type, lambda s, v: setattr(s.__partition, "type", v))

The geometry property is indeed read-write, but the system property is write-only. git blame leads us to the interesting commit 2fc0ee2b, which changes the definitions of quite a few properties and removes the _readOnly method which raises an exception. Even more interesting is the fact that the Partition.geometry property hasn't been changed. If you look closer you will notice that the deleted definition and the new one are exactly the same. It looks like the problem existed even before this change.
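As a side note, the semantic difference the tests exercise can be shown in plain Python (my own minimal example, unrelated to pyparted's code):

class WithReadOnlyGeometry(object):
    @property
    def geometry(self):             # getter only, no setter defined
        return self._geometry

class WithReadWriteGeometry(object):
    geometry = property(lambda s: s._geometry,
                        lambda s, v: setattr(s, "_geometry", v))

WithReadWriteGeometry().geometry = "new value"   # accepted silently
try:
    WithReadOnlyGeometry().geometry = "new value"
except AttributeError as err:
    print(err)                                   # can't set attribute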

Digging down even further we find commit 7599aa1 which is the very first implementation of the parted module. There you can see the _readOnly method and some properties like path and disk correctly marked as such but geometry isn't.

Shortly after this commit the first test was added (4b9de0e) and a bit later the second, empty test class, was added (c85a5e6). This only goes to show that every piece of software needs appropriate QA coverage, which pyparted was kind of lacking (and I'm trying to change that).

The reason this bug went unnoticed for so long is the limited exposure of pyparted. To my knowledge anaconda, the Fedora installer, is its biggest (if not its only) consumer, and maybe it uses only the _ped interface (I didn't check), or it doesn't try to do silly things like setting a value on a read-only property.

The lesson from this story is to test all of your interfaces and also make sure they are behaving in exactly the same manner!

Thanks for reading and happy testing!


Capybara's within() Altering expect(page) Scope

When making assertions inside a within block the assertion scope is limited to the element selected by the within() function, although it looks like you are asserting on the entire page!

scenario 'Pressing Escape closes autocomplete popup' do
  within('#new-broadcast') do
    find('#broadcast_field').set('Hello ')
    start_typing_name('#broadcast_field', '@Bret')
    # will fail below
    expect(page).to have_selector('.ui-autocomplete')
    send_keys('#broadcast_field', :escape)
  end
  expect(page).to have_no_selector('.ui-autocomplete')
end

The above code failed at the first expect() and it took me some time before I figured out why. Capybara's own test suite gives you the answer:

it "should assert content in the given scope" do
  @session.within(:css, "#for_foo") do
    expect(@session).not_to have_content('First Name')
  end
  expect(@session).to have_content('First Name')
end

So know your frameworks and happy testing.


Unix Stickers for Your Laptop

Last month I was asked to review stickers from UnixStickers. In return I would receive some of them. I've made them a counter offer - they send me stickers and I give them to students attending my QA-and-Automation-101 course.

Unix Stickers

Yesterday I gave away everything I was sent, some of which you can see in the picture above. Almost all of the stickers were gone in minutes. The only ones left were the yellow JS ones and the Fedora infinity logo. It turned out most students are not familiar with Fedora, but otherwise they liked the stickers.

If you haven't come across UnixStickers until now I definitely recommend it. It is a great source to purchase stickers, mugs and T-shirts branded with your favorite open source project(s). In return some of the money is donated back to the community to support their open source work. A great business model in my opinion.


3 Bugs in Grajdanite

Grajdanite is a social app that allows everyone (in Bulgaria) to photograph vehicles in breach of traffic rules or misbehaving drivers, upload the pictures online and ask the drivers to apologize. They also offer some functionality to report offenses to the authorities and are partnering with local municipalities and law enforcement agencies to make the process easier. And of course this is one of my favorite apps as of late.

Missing Icon in My Profile

Missing icon

The more offenses you report the more points you get. Points lead to ranks (e.g. junior officer, senior officer, etc). The page showing your points and rank is missing an icon. If I had to guess this is the badge which comes with different ranks.

Preloading the Very First Form Value

Preloading gone wrong

Once you opt for reporting an offense to the authorities you need to specify the address where the action took place, your name, phone and e-mail address. The app correctly saves your details and pre-loads them later to speed up data entry. However I typed my e-mail wrong the very first time. Now every time I want to report something the app pre-loads the wrong e-mail address. Even after I change it to the correct one, the next time I still see the very first, wrong value.

In code this is probably something like:

# pre-load
form.email = store.get("email", "")
form.show()

# save
if form.firstTime():
    store.save("email", form.email)

The fix is to save the form value every time (not an expensive operation here), or to check whether the current value is different from the last saved one and only then save it.
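Continuing the pseudo-code from above, the fix could look something like this (a sketch only):

# save - run on every submit, not just the first time
if form.email != store.get("email", ""):
    store.save("email", form.email)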

DST and Time Sync

The last bug is in the app's confirmation email. Once an offense is reported the user receives an email with the uploaded photo and the information they have provided. The email includes a timestamp. However the email timestamp is 1 hour off from the actual time. In particular it is 1 hour behind the current time and I think the email server doesn't account for daylight saving time.

The result from this is:

  • Report an offense
  • Wait 1 minute for the email to be received;
  • The email says the offense happened 1 hour ago!
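For reference, producing the timestamp with an explicit, DST-aware time zone avoids this whole class of bugs; a small Python sketch (the zone name is just an example):

from datetime import datetime
from zoneinfo import ZoneInfo   # standard library in Python 3.9+

now = datetime.now(ZoneInfo("Europe/Sofia"))
print(now.isoformat())          # the UTC offset flips automatically with summer time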

All of these bugs are in version 3.86.3, which is the latest one.


