Tester Feedback

Use this form to (publicly) submit your feedback and feature ideas. Others will be able to vote for and discuss your idea.
For Level 4–5 testers: double review for “Not a Bug (intentional behavior)” and “Other” rejections
Hello TestIO Team,

According to TestIO Academy’s rejection reasons, the categories “Not a Bug” and “Other” are especially risky. These reasons often depend on subjective judgment: a team lead may hold one opinion while several experienced testers see the issue differently. Example cases: 2814509, 2814496.

It’s clear that new testers sometimes submit weak or irrelevant bugs; that is part of the learning process. For experienced testers (Level 4–5), however, there should be a conflict-free review process for these cases. When a tester with proven experience receives a rejection like “Not a Bug” or “Other,” the decision should not rely solely on one team lead’s personal interpretation. Team leads are experienced professionals and naturally have their own perspective, but sometimes this leads to decisions that feel one-sided (as we say back home, “I’m the boss, you’re the fool”). This kind of approach demotivates testers and can damage communication, especially when rejections feel emotional rather than analytical.

To make the process fairer and more transparent, bugs rejected as “Not a Bug” or “Other” should:
• undergo a double review (e.g. by another team lead or a senior QA), or
• be escalated to the client for the final decision, with testers able to challenge such decisions without opening a dispute, and with payment withheld until final approval or rejection in the second round.

This would apply only to Level 4 and 5 testers, and only to the “Not a Bug (intentional behavior)” and “Other” rejection reasons.

Disputes are stressful and often create cold conflicts between testers and team leads, which harms motivation and teamwork. Adding a neutral review layer for these subjective rejection categories, especially for experienced testers, would make the platform fairer, more motivating, and more professional for everyone.

Thank you for your review!

Best regards,
Aleksandra
Can the bug assessment sheet be revised to cover more scenarios?
It is sometimes very confusing when the exact same bug behavior is accepted in one test and rejected in another; I say this because I had a very recent experience of it. For instance, in one case the TL never mentioned certain exclusions in the chat but went ahead and rejected bugs anyway (e.g. bugs reported on credit-card encryption issues, or bugs reproducible only on a specific device or browser/OS).

If some TLs cannot be clear about these kinds of exclusions, then Test IO should be clear on such disputable issues. On reproducibility, for example, I have also seen bugs that no other tester could reproduce positively still get forwarded to the customer because they were OS- or device-specific. To this day, it is still unclear why Test IO permits such inconsistencies in bug rejection reasons from some TLs, even after a bug dispute, and especially when the exact same bugs have been approved in previous tests.

In such cases, the TL’s position should not be upheld in a bug dispute when the bugs are valid and the exclusions were not made clear in the test scope, the chat, or the Test IO Academy. If Test IO has not stated in the Academy that certain bug scenarios should no longer be reported, then ALL TLs should ALWAYS state test-specific exclusions (those not listed in the out-of-scope section) in their opening comments via chat (e.g. in one test a TL approves a card-encryption caching bug, while in another test a TL rejects the exact same card-encryption caching issue).

This is just one example; I could go on and on with others. I would really appreciate Test IO looking into such disputable issues and revising the bug assessment sheet to provide clarity and classifications for such bugs.