QuantifiedCode makes PRs have unsuccessful checks #583
Comments
Sounds like a sensible approach. There's also a wiki page on quantifiedcode. I am not sure if I ever was successful in disabling a test (I think I tried it for the "non pythonic methods names"). The ones I would disable:
As you suggest, it would be nice if one could tell QC which tests should contribute to a "QC failure" and which ones should be treated more as guidelines than as rules. Tangential: have you been able to have QC/Cody submit a PR to us? I haven't.
I'll summarize here the exchange I've just had with @programmdesign, one of QC's developers:

It's not possible to have the checks 'pass' or 'fail' based on some severity threshold. The recommended way to deal with unwanted fails is indeed to disable the checks we don't like. I haven't had the time to do that properly yet, but I let them know that @orbeckst was having trouble disabling some tests. If problems persist, there might be a bug in QC there. Given this, let me know if you'd rather disable the PR checking, at least while we fine-tune the analysis (so that we no longer have unsuccessful checks).

Cody seems to work intermittently. I issued a fix when I was on the phone with QC, and -- of course -- it then happened to work cleanly, and PR #586 was submitted almost immediately. @programmdesign mentioned there might be some queuing issues during busier times (I wouldn't really mind if the PR took half an hour to show up, but it either shows up immediately or never). Again, I'll keep in touch with them if we see that this inconsistency remains.

On @kain88-de's problem getting his fork to be checked, I was told that they're looking into that (maybe PRs from forked repos aren't a very common usage pattern in their user base, but I certainly wouldn't want to discourage @kain88-de from working like that). We might also expect some inconsistency on checks of PRs opened before QC analysis was set up (which might also explain @tylerjereddy's observation that code checks in #580 failed on code he didn't touch).
@mnmelo: Just to clarify: you can have a 'severity' threshold by simply turning, e.g., all "Recommendations" off. You can do this within a minute in your project's settings. We've decided to go this way so that you are in full control of what shows up in your Pull Request status. Let me know if this doesn't work for you. It should be working fine, though.
@programmdesign: yes, but we might want to keep some less severe checks, just not be so visually alarmed by them.
OK. Just to confirm: you are saying you want to see all the issues on QC, but you'd like to disable all checks of a certain severity for PR statuses only?
@mnmelo I think my problem was only that I opened the PR before we had QC controls. In my new PRs it works. I think it worked with the other old PRs because the push to the MDAnalysis repo triggered the QC check. So for new PRs everything seems to be fine.
@programmdesign: yes, you got the idea right. Finally, stating in the status message how many issues were found would be very useful (maybe discriminating by severity, or between 'serious' and 'non-serious' types). But this is my interpretation. Guys, do give your feedback on what the ideal PR check should look like. That's good news, @kain88-de; all is well, then.
@mnmelo, you summed up my view nicely.
@programmdesign, it would actually be nice if quantifiedcode could submit issues for problems that Cody cannot fix (instead of a PR). At the moment I have to do this manually, see e.g. #564. For that to work nicely, I would also like to have a link that goes directly to the QC issue (at the moment I have to get a link to one of the flagged files).

EDIT: removed the link to the #589 example issue, which Cody can fix (so I don't have to raise the issue manually); I just wasn't logged into QC, and therefore the "fix it" link wasn't there...
I suggest we keep quantifiedcode but, for right now, disable all the checks that we (mostly) don't care about. For me this would be:
Any other suggestions for the banned list? Rationale: I want to see a green checkmark if the PR passes Travis.... |
Yeah, I've already disabled a few that I thought were unnecessary, so I can't really object.
I would like to keep the pythonic naming. It is useful for new code and doesn't pop up that much for changes in existing code. |
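For anyone unfamiliar with that check, here is a minimal, hypothetical illustration of the naming style it enforces; this is invented example code, not actual QC output:

```python
# Hypothetical illustration of what a pythonic-naming check flags:
# PEP 8 asks for snake_case function and method names.
class Trajectory(object):
    def __init__(self, frames):
        self._frames = frames

    def numFrames(self):   # would be flagged: camelCase method name
        return len(self._frames)

    def n_frames(self):    # passes: snake_case
        return len(self._frames)
```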
I don't think that this is still an issue for us; I haven't really heard any complaints about QC recently. I am closing; if you disagree, please re-open (or ask for it to be re-opened).
With the recent integration of quantifiedcode, PRs started being labeled with unsuccessful checks. QC tests the diff of develop to the branch being merged and reports 'bad code' in those parts only. Whenever a code issue is found, the PR is labeled with an unsuccessful check.

The problem(s):
Most of the checks boil down to guidelines we don't want to follow. Having a PR look 'unsuccessful' because of that is, well, ugly (#563, #580).
In addition, @tylerjereddy opened a PR (#580) where QC complained about code issues in parts of code that hadn't been touched. I'm assuming this is a QC bug and will bring it up with them.
What can be done?
The QC tests are fairly customizable. We can disable as many as we want, individually. I'd suggest that every time we bump into a test we don't agree with we disable it. We'll end up with a selected set of tests that are important to us, and then an 'unsuccessful check' will be much more meaningful. (admins of the QC MDAnalysis project should be able to disable tests directly on the issue list).
Still, the above solution won't be perfect. After all, we're talking about coding guidelines, not absolutely required syntax. There will be times when we will want to break rules. In such cases we'll always have that annoying unsuccessful check (and possibly a discussion in the PR on why the guideline wasn't followed).
In other cases it might happen that a check covers too much, flagging both cases we want to avoid and cases we want to keep. We might then think about disabling that test and rolling our own similar test, adapted to our style (see the sketch below).
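As a minimal sketch of what such a home-rolled check could look like, assuming a standalone script run over the files touched by a diff; the camelCase rule is an illustrative stand-in for whatever guideline we would actually encode, not QC's implementation:

```python
# Illustrative sketch only: a home-rolled style check that flags
# camelCase function/method names, similar in spirit to a
# pythonic-naming check. The exact rule is an assumption.
import ast
import re
import sys

CAMEL_CASE = re.compile(r"[a-z][A-Z]")  # lowercase directly followed by uppercase


def check_file(path):
    """Return (lineno, name) pairs for camelCase function definitions."""
    with open(path) as handle:
        tree = ast.parse(handle.read(), filename=path)
    return [(node.lineno, node.name)
            for node in ast.walk(tree)
            if isinstance(node, ast.FunctionDef) and CAMEL_CASE.search(node.name)]


if __name__ == "__main__":
    for filename in sys.argv[1:]:
        for lineno, name in check_file(filename):
            print("{0}:{1}: non-pythonic name '{2}'".format(filename, lineno, name))
```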
I can also look into having QC report something more neutral than 'unsuccessful', or see if it's possible to have the success depend on the severity of the found issues (there are four levels, from 'Recommendation' to 'Critical').
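To make the severity idea concrete (and to fold in the earlier suggestion of reporting issue counts in the status message), here is a rough sketch of posting such a status through GitHub's commit status API. The two middle severity names and the `quality/severity-gate` context are assumptions; only 'Recommendation' and 'Critical' are named above, and the issue counts would have to come from QC itself:

```python
# Rough sketch, not QC's implementation: post a commit status whose
# state depends on issue severity and whose description reports counts.
import requests

SEVERITIES = ["Recommendation", "Minor", "Major", "Critical"]  # middle two assumed


def post_status(owner, repo, sha, token, counts, fail_at="Critical"):
    """Post success/failure based on a severity threshold, with counts."""
    threshold = SEVERITIES.index(fail_at)
    serious = sum(counts.get(level, 0) for level in SEVERITIES[threshold:])
    total = sum(counts.values())
    state = "failure" if serious else "success"
    description = "{0} issues found, {1} at or above '{2}'".format(
        total, serious, fail_at)
    response = requests.post(
        "https://api.github.com/repos/{0}/{1}/statuses/{2}".format(owner, repo, sha),
        json={"state": state, "description": description,
              "context": "quality/severity-gate"},
        headers={"Authorization": "token {0}".format(token)},
    )
    response.raise_for_status()


# Hypothetical usage:
# post_status("MDAnalysis", "mdanalysis", sha, token,
#             {"Recommendation": 12, "Critical": 0})
```

This only shows the status-setting mechanics; a QC-side integration would presumably populate the counts from its own analysis.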
Finally, if you think this isn't worth the hassle we can stop QC from giving feedback on PRs. Reviewers could then go to quantifiedcode.com to check the issues that were found (an analysis is triggered on every commit).
I vote that for the upcoming PRs we take the time to prune the tests we don't want, knowing that we'll have plenty of unsuccessfully checked PRs (a discussion about some checks might be useful; maybe we can do it in this issue's conversation?). If we see it's leading nowhere we can disable PR feedback. In the meantime I can try to silence the unsuccessful labels per the ideas above.
What say you?