A few comments in no particular order:
The Gerrit doc also specifies that "By default, a change is submittable when it gets at least one highest vote on each label and has no lowest vote (aka veto vote) on any label". We could create a dedicated label for SonarQube so its signal does not get mixed up with our regular build verification, and we could encode higher-level logic as custom submit rules.
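As a rough illustration, a dedicated label plus a submit requirement might look like the `project.config` sketch below. This is only a sketch under assumptions: the label name `SonarQube`, the value descriptions, and the use of submit requirements (available in recent Gerrit versions) are all choices we would need to confirm, not settled decisions.

```ini
# Dedicated label so SonarQube votes are separate from Verified.
# function = NoBlock means the label scores the change without
# blocking submission by itself (matching a soft 0/+1 strategy).
[label "SonarQube"]
    function = NoBlock
    value = -1 Quality gate failed
    value = 0 No score
    value = +1 Quality gate passed

# Higher-level logic as a submit requirement: the change needs the
# highest SonarQube vote and must not carry a veto on that label.
[submit-requirement "SonarQube"]
    description = SonarQube quality gate must pass
    submittableIf = label:SonarQube=MAX AND -label:SonarQube=MIN
    canOverrideInChildProjects = true
```

With `function = NoBlock` the label is purely advisory; tightening the policy later would only mean adjusting the submit requirement, not re-plumbing the CI integration.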
In the end, we already allow a human to override the tools and tag a build as +2 verified. I don't think we should encourage that, but it is possible. And I think it is a good practice to make it clear that humans are smarter than tools.
In the short term, getting good feedback from SonarQube, in the form of clear textual comments that explain why a quality gate is failing, is much more important than having a vote. In the longer term, we should trust the SonarQube analysis enough to let it cast an active vote, either promoting or blocking a CR from being merged.
If we get a lot of failed quality gates (because of insufficient code coverage or because of other rules), we should review our rules and thresholds. Or we should make it clear that we believe our current rules are the right ones and make sure we improve our code to meet the standard. A quality standard that breaks often means the standard does not agree with reality. We need to change either the standard or the reality.
I think that at the moment we don't have enough confidence in the Quality Gate result, so a less aggressive 0/+1, or even 0/0, strategy makes sense. In the long term, I think we should aim to have SonarQube vote -1/+1.