Displaying JUnit Test Reports on GitLab

In this blog post I want to look at something interesting I found earlier this semester when we were learning about JUnit testing and combining it with GitLab. In class we learned how to use Gradle to build our Java programs and run JUnit tests, and how to have GitLab run those tests whenever we push our code, using GitLab’s continuous integration feature. As GitLab’s documentation says, using Gradle and JUnit with GitLab CI will show whether the tests fail, but I wanted to take this a step further and see whether it was possible to display more information about JUnit test statuses on GitLab itself. This led me to an article in GitLab’s documentation about a feature that can display JUnit test reports on merge requests. As that documentation shows, this can easily be enabled on any Java project that already uses GitLab CI with Gradle to run JUnit tests. All you need to do is add the necessary lines from the document to the project’s GitLab CI/CD configuration file, and the test reports will show up in merge requests.
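Roughly, the addition to the GitLab CI/CD configuration file (.gitlab-ci.yml) looks like the snippet below. This is only a sketch based on the documentation’s Gradle example; the job name, stage, and report path may differ for your project, and the when: always line simply makes sure the report is uploaded even when the tests fail.

test:
  stage: test
  script:
    - ./gradlew test
  artifacts:
    when: always
    reports:
      junit: build/test-results/test/TEST-*.xml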

I had already tried this a couple of months ago when I first found it, but I wanted a nice demonstration of the feature, so I created a small Java program that has a basic Student class along with a JUnit test class. I then converted this to a Gradle project, using some of the example programs and instructions provided in this class, and pushed it to a new GitLab project. On the master branch of this project the tests pass, as indicated by the most recent GitLab CI job. After getting the initial code pushed and passing the tests, I created a testing branch where I changed the code in the Student class so that one of the tests (testSetLastName) would deliberately fail. Creating a merge request on GitLab for this branch and pushing this “broken” code results in the test failing when it runs on GitLab with Gradle, and GitLab then displays on the merge request which JUnit test(s) failed:

[Screenshot: blog-post-1-screenshot.PNG — the merge request page showing the failed JUnit test]
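For context, the failing test looks roughly like the sketch below. This is a simplified version only; the actual classes are in the demo repository linked at the end of this post, and the JUnit 5 style used here is just one way to write it.

// Student.java (simplified)
public class Student {
    private String firstName;
    private String lastName;

    public Student(String firstName, String lastName) {
        this.firstName = firstName;
        this.lastName = lastName;
    }

    public String getLastName() { return lastName; }

    // On the testing branch this setter was deliberately broken so the test below fails
    public void setLastName(String lastName) { this.lastName = lastName; }
}

// StudentTest.java (simplified)
import static org.junit.jupiter.api.Assertions.assertEquals;
import org.junit.jupiter.api.Test;

public class StudentTest {

    @Test
    public void testSetLastName() {
        Student student = new Student("Jane", "Doe");
        student.setLastName("Smith");
        assertEquals("Smith", student.getLastName());
    }
}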

I found this little feature pretty awesome as a way of combining software testing with software management tools, and I can easily see how useful it would be for checking whether new or modified code in a project causes tests to fail. Beyond just showing whether the tests pass, this feature lets us see exactly which tests failed, directly on the merge request itself, instead of the alternative the documentation describes of digging through reports that may contain thousands of lines to find the failed test. I will definitely be implementing this on any projects I work on that use JUnit tests and are hosted on GitLab.

Link to article: https://docs.gitlab.com/ee/ci/junit_test_reports.html

Link to demo program: https://gitlab.com/cradkowski/gitlab-junit-test-reports-demo

The Workflow in Action

Last week started on Monday with getting back on track with this project. I looked at some of the issue updates on GitHub, including the issue about switching to Discord. I then emailed Dr. Wurst asking how the DCO sign-off works for committing to the project, hoping that I could push my setup documents and diagrams before the next day’s LFP committee meeting. To my surprise, the GitLab Gold issue had been fixed and the WSU account now has access to all of the Gold features again. I was really happy about this, since it means I can go back to testing the advanced features offered with this package. Sadly, the GitHub issue still remains, but as of Monday they hadn’t deleted my testing accounts yet. After that, I looked at the Probot question one last time and wrote my reply to Dr. Jackson about it. Finally, I looked at Dr. Wurst’s earlier question about free server time for open source projects. I couldn’t find much information about whether this is offered, at least by Amazon Web Services or Google.

Tuesday started with a research meeting with Dr. Wurst. We covered a lot in this meeting, but the biggest thing was breaking the old issue card up into multiple new issues, as the old one was getting too big and congested. Dr. Wurst then closed the old issue for good. The new issues included:

Breaking the work up this way makes it easier to pick one task, finish it, and watch its progress as it moves across the project board on GitHub.

After the meeting, I read the contributing document to see how I am supposed to make additions to this project before committing and pushing to the repository. I then updated my local repository for the shop setup documents and re-exported all of the graph files. Next, I looked at the first pull request for this project, approved it, and merged it into the master branch. Dr. Jackson decided that we should start using our own workflow when making additions to the LFP projects, so I created a new branch for my setup documents, signed the previous commits with the DCO sign-off, and pushed the changes to GitHub. I then created a pull request and requested that Dr. Jackson review it.

Wednesday started with reading issue updates. I then looked for a way to always sign commits without having to add the -s parameter each time, but I couldn’t find what I wanted. I then started revising the documentation according to Dr. Jackson’s review comments and pushed my changes to the branch. Finally, I started looking at what to do for creating workflow documentation. I discovered that Dr. Jackson had already written a lot covering this and asked on the issue card what more I could add to it.

Thursday, I found out that the links to the diagrams in the setup documentation had broken when it was merged into the master branch. I fixed these image links by making them relative, as Dr. Jackson suggested. I then started working out the answer to Dr. Jackson’s question about how labels work on GitLab issues and issue boards. In doing so I figured out how the different kinds of labels work in GitLab, including group labels, which let a group-level issue board manage issues across the projects underneath the group. In response to the original question about how the two boards each had a “doing” column, I concluded that the hand-off situation Dr. Jackson was asking about must have used two differently named columns, such as “Team1 doing” and “Team2 doing”, because if both boards use the same column name, moving an issue on one board automatically moves it on the other. I then updated the issue card with my response.

Friday, I updated the platform comparison feature sheet with some of the things I’ve discovered while using the platform and closed Dr. Jackson’s comment about WIP merge requests. I posted the link to this Google Sheet on the community board’s issue card about GitHub vs. GitLab, then reviewed and approved some pull requests for Dr. Jackson. Next, I started on the documenting-continuous-integration issue and looked at how to enable CI for a GitHub project, specifically using Travis CI, as it seems to be the most popular CI marketplace app on GitHub. This turned out to be really easy: for an example Java project I had, I only needed to set the language in the config file to Java. Finally, I reviewed another pull request for Dr. Jackson and added some suggested edits, especially fixing broken document links, before approving it.
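For reference, the Travis configuration really can be that small. Here is a sketch of the kind of .travis.yml I mean; assuming the project has a committed Gradle wrapper, Travis’s Java defaults should take care of building the project and running the tests.

# .travis.yml
language: java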

Saturday, I approved and merged some more pull requests and continued looking at how to get the DCO bot working on a GitHub repository, in order to double-check the instructions Dr. Jackson posted in our documentation for how to do this.

Sunday, I looked at whether GitLab has its own version of a DCO bot. I found, through several different pages, that using GitLab’s Push Rules for commits you can create a rule that requires every commit message to match a regular expression, which can be used to enforce sign-offs. I then had to figure out the correct regular expression to check that every commit has a line matching the form:

Signed-off-by: firstName lastName <username@domain>

After reading through a couple of tutorial websites and finding a great one that lets you test your expression against a sample string in real time, I finally figured out the correct expression to be:

Signed-off-by: \w+ \w+ <.+@.+\..+>
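To double-check that each piece of the expression does what I think it does, a quick throwaway Java check like the one below works locally; the class name and sample messages are just illustrative, since GitLab itself evaluates the rule against each pushed commit message.

import java.util.regex.Pattern;

public class SignoffCheck {
    // The same expression used in the GitLab push rule above
    private static final Pattern SIGNOFF =
            Pattern.compile("Signed-off-by: \\w+ \\w+ <.+@.+\\..+>");

    public static void main(String[] args) {
        String signed   = "Fix typo\n\nSigned-off-by: Jane Doe <jane.doe@example.com>";
        String unsigned = "Fix typo";

        System.out.println(SIGNOFF.matcher(signed).find());   // true
        System.out.println(SIGNOFF.matcher(unsigned).find()); // false
    }
}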

I then tested the expression in the GitLab web UI to make sure it works, and it did. I also tested it locally with Git Bash and with branches. It works a little differently than the DCO bot on GitHub: it blocks all commits from being pushed unless they include the sign-off. I actually like this, as it prevents commits from ever reaching the project without the sign-off. The downside is that the sign-off is needed on merge commits too. I then updated the DCO documentation to include instructions for enabling this on GitLab and added some comments about it on the pull request. Finally, I created the high-level continuous integration document, added a definition of CI and an overview of how to enable it on GitHub and GitLab, then pushed it and created a pull request for it.

Looking back on this week, I found that I enjoy the workflow we are using: selecting an issue to work on, creating a branch, pushing the work, and then asking someone else to review it before merging. I especially like the review step, since you always get a second opinion before your work lands on the master branch.

The Decorator Design Pattern

This week I will be looking at the decorator design pattern. The article I read discusses the advantages of using the decorator pattern over inheritance in Java and gives examples of its implementation. The main advantage is that significantly fewer classes are needed when adding functionality with a decorator than with traditional inheritance, since small decorators can be composed instead of creating a subclass for every combination of features. The article also argues that the decorator pattern should be natively supported by programming languages.

I think this is the most interesting part of the entire article. I agree that languages, and Java in particular, should start adding native support for these design patterns. As the article says, this would make the pattern much easier for beginner programmers to implement; design patterns can be challenging when you are first learning about them, especially if you are just starting out programming. A native implementation would also make this pattern, and others, much cleaner to implement. As the article’s example shows, it would take far less work and code if a new class could simply use a decorates statement instead of implements. That approach would encourage me and others to use the pattern, since it would be easy to apply and would not require repetitive typing, which, as the article states, leads to better programs.

I think this point about native support should go beyond this particular pattern. Java and other object-oriented languages should add native support for other popular design patterns as well, such as strategy or singleton. That would make it easier to experiment with different patterns early on when learning to program and would let people see the benefits without necessarily struggling with the implementation. I would have found it much more encouraging to use one of these patterns, as the article suggests, if it were natively supported rather than having to create my own implementation based on textbook examples. Overall, this article and what I’ve learned in this class have taught me to prefer a design pattern over basic inheritance in most scenarios when building programs, and that native support for these patterns would make them even better to use.
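For comparison, this is roughly what the hand-written version looks like in today’s Java, with no native decorates keyword; the beverage example and class names are my own illustration, not the article’s code.

// A component interface and one concrete component
interface Beverage {
    String description();
    double cost();
}

class Coffee implements Beverage {
    public String description() { return "Coffee"; }
    public double cost() { return 2.00; }
}

// The decorator base class implements the same interface and wraps a component
abstract class BeverageDecorator implements Beverage {
    protected final Beverage wrapped;
    BeverageDecorator(Beverage wrapped) { this.wrapped = wrapped; }
}

class Milk extends BeverageDecorator {
    Milk(Beverage wrapped) { super(wrapped); }
    public String description() { return wrapped.description() + ", milk"; }
    public double cost() { return wrapped.cost() + 0.50; }
}

class Mocha extends BeverageDecorator {
    Mocha(Beverage wrapped) { super(wrapped); }
    public String description() { return wrapped.description() + ", mocha"; }
    public double cost() { return wrapped.cost() + 0.75; }
}

public class DecoratorDemo {
    public static void main(String[] args) {
        // Decorators stack at runtime instead of needing a subclass for every combination
        Beverage order = new Mocha(new Milk(new Coffee()));
        System.out.println(order.description() + " -> $" + order.cost()); // Coffee, milk, mocha -> $3.25
    }
}

Even this small example shows the boilerplate, the wrapping constructor and the delegation in every decorator, that a native decorates construct like the one the article proposes would eliminate.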

Source: https://dzone.com/articles/is-inheritance-dead