Talk:Security/SOP/Security Readiness Reviews

Merge Concept and Design review processes

IMO these are basically the same thing, or at least dovetail into one another. I propose merging these subsections in some logical way and referring to them as a Security Concept Review. This would hopefully avoid confusion while reducing our service offerings from three to two.

Add a "code freeze" or ready for deployment clause
Under the Expectations and/or Review process sections, we should explicitly call out that code submitted for security readiness reviews MUST be in a completed state. That is, "frozen": no development (or only extremely minor changes) prior to deployment.
How long would the code freeze last? Who would decide what is "minor"? Maybe there is a better way to express the intent behind this proposed rule? I worry that making this an explicit rule may be read by feature teams as a signal to avoid the need for a security review altogether, which in turn leads to more kitchen-sink extensions and services that allow folks to bypass the freeze.
Theoretically the freeze would last for the duration of a standard security review. I'd like to believe that could be a week or two once the review was assigned, but the reality is that it might be closer to three or four weeks (or the 30-day period established within the policy). The issue here is that it's not an effective use of time and resources to review a moving target. If the code is not close to a deployment-ready state (and yes, that requires some minimal, workable definition within this policy document) then we really cannot guarantee the quality or timeliness of the review (the latter is already a struggle). I think we're open to suggestions here (I am, personally), but there needs to be some kind of consensus on reasonable expectations for both requesters and reviewers.
Disclaimer: I am not a Product Manager, so my interpretation of Product Management might be completely off.
I don't want to sound like a jerk here. I completely understand and can relate to Security folks being asked to review a moving target, and, being serious and professional about what they do, not being willing to "just" review a snapshot of the code at point X in time while what gets shipped to production might be something completely different. I have a feeling, though, that apart from the resourcing aspect @BDavis (WMF) talked about below, the duration of the security readiness review is also something that might seem problematic for the people "owning" the thing under review. 30 days is not a short period of time. If it were a week, I could imagine a Product Manager more easily incorporating this time into their product timeline, whereas four weeks might seem like more of a burden, or an extra hurdle, and that might be a reason why reviews are scheduled with the code in a less-than-ready state, or why multiple products are squeezed into one to "avoid" the security review. From a security perspective that does not seem like a smart strategy, but I don't think it stems from Product people ignoring or being unaware of the importance of security. They definitely get it, but what they primarily want is to see their Product go live.
I am not suggesting the review time should be shortened immediately. That would clearly not lead to anything good. I do hope, though, that some "middle ground" could be found in the mid-term future. I guess that would need a discussion between the people building things (Product too) and the Security team.
From a pragmatic point of view I understand the need for a stable review point. From the practical point of view of an engineering team lead or product manager, though, needing to have the codebase sit idle for 2-4 weeks while under review is potentially problematic. The problem I see from this vantage point is: what will the engineering team do in that undefined interval? Move on to other work? That's dangerous from a resourcing point of view; in my personal experience at the Foundation, once an engineer moves to working on another project it is unlikely that they will ever return.
When I developed Striker (the last thing I did that underwent a security review), the reviewer and I picked a stable point in the git history for them to focus the review on. We then kept track of additional commits that were merged into the tree after that point and before the initial prod deployment. These were reviewed as part of the review of the fixes I made in response to the initial security review. This gave the reviewer a stable version that was close to 90% feature complete to work on, and still gave me the ability to continue development while waiting for the security review feedback.
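The "pin a review point, then track what landed afterwards" workflow above can be sketched with ordinary git commands. Everything in this sketch (the tag name, commit messages, and throwaway repository) is invented for illustration, not taken from the actual Striker review:

```python
# Illustrative sketch: tag the agreed-upon review point, then list every
# commit merged afterwards so the reviewer can fold those into the
# follow-up pass alongside the fixes from the initial review.
import subprocess
import tempfile

def git(*args: str, cwd: str) -> str:
    """Run a git command in the given repository and return its stdout."""
    return subprocess.run(["git", *args], cwd=cwd, check=True,
                          capture_output=True, text=True).stdout

repo = tempfile.mkdtemp()
ident = ["-c", "user.name=demo", "-c", "user.email=demo@example.org"]
git("init", "-q", cwd=repo)
git(*ident, "commit", "-q", "--allow-empty",
    "-m", "state submitted for review", cwd=repo)
# The stable point the reviewer focuses on:
git("tag", "security-review-start", cwd=repo)
git(*ident, "commit", "-q", "--allow-empty",
    "-m", "merged during review", cwd=repo)
# Commits that landed after the review point:
later = git("log", "--oneline", "security-review-start..HEAD", cwd=repo)
print(later)
```

The final `git log` range (`security-review-start..HEAD`) is exactly the set of commits the reviewer would need to look at in the follow-up pass.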
Ultimately the process is up to the Security team, but the more hurdles it includes, the more likely people are to find ways around it. If there are significant time and energy investments needed to qualify for a new deployment, people will try very hard to shoehorn new features into existing Extensions and services which do not require pre-deploy security review, rather than making smaller, more cohesive Extensions and services.
FWIW, for me the main thing is basically that it's at a viable-product stage.
For example, if someone asks for a review of an extension related to editing and the editing part isn't done yet, that's a problem (things like this have happened in the past).
At least personally, I don't mind if a normal level of development is ongoing, as long as it's in a state where it wouldn't be crazy to deploy that version (feature-complete-ish) and they aren't rewriting it from the ground up in the next commit.
Instead of a code freeze, how about introducing some triggers that would either halt or delay a security review? Thinking along the lines of:
1) If 80% or fewer of the features have been implemented in the code submitted for review, according to specification documents, etc. This obviously places an onus on the code owners to provide such documentation/statistics when asked, and it can easily turn into more of a judgment call if we're not careful.
2) If the code submitted for review sees more than X commits per day/week/etc. on its relevant development branch during the review process, then the code would be considered under active development and the review would be delayed.
3) If the deployment date is changed (in either direction) the estimated review delivery date would be rescheduled.
4) A security review will only be considered valid for commits A..Z as indicated within the security review request. If X number/volume of additional features are introduced during subsequent development cycles, a new security review request would need to be created based upon the expected deployment date of these additional features.
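As a rough illustration of how triggers like these could be checked mechanically, here is a hedged Python sketch. Every name and threshold in it (`completeness_floor`, `commit_ceiling`, and so on) is an assumption made up for illustration, not part of any actual policy:

```python
# Hypothetical model of the four proposed halt/delay triggers.
# All field names and default thresholds are illustrative only.
from dataclasses import dataclass
from datetime import date

@dataclass
class ReviewRequest:
    features_done: int               # features implemented so far
    features_planned: int            # features in the specification
    commits_this_week: int           # commits on the dev branch during review
    deployment_date: date            # date supplied with the request
    scheduled_deployment: date       # date currently on the calendar
    reviewed_range: tuple[str, str]  # (first, last) commit covered, i.e. A..Z
    new_features_since: int          # features added after the reviewed range

def halting_triggers(req: ReviewRequest,
                     completeness_floor: float = 0.8,
                     commit_ceiling: int = 20,
                     feature_ceiling: int = 0) -> list[str]:
    """Return which of triggers 1-4 above would halt or delay the review."""
    hits = []
    if req.features_done / req.features_planned <= completeness_floor:
        hits.append("1: 80% or fewer features implemented")
    if req.commits_this_week > commit_ceiling:
        hits.append("2: under active development")
    if req.deployment_date != req.scheduled_deployment:
        hits.append("3: deployment date changed")
    if req.new_features_since > feature_ceiling:
        hits.append("4: new review needed for added features")
    return hits

req = ReviewRequest(features_done=7, features_planned=10,
                    commits_this_week=30,
                    deployment_date=date(2019, 5, 1),
                    scheduled_deployment=date(2019, 5, 1),
                    reviewed_range=("abc1234", "def5678"),
                    new_features_since=0)
hits = halting_triggers(req)
```

With these illustrative thresholds, a request that is 70% feature-complete with 30 commits in a week trips triggers 1 and 2; the hard part, as noted above, is agreeing on the thresholds rather than evaluating them.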
Out of curiosity, would such halting of the review even work for you Security people? I mean, think about the situation where I request a review while the code is in state X. A security engineer then starts the review. Meanwhile development on the code continues, and in the middle of the review, with the security person having already spent a week or two, the halt point is reached because the code has changed entirely. That would mean the review process goes back to the start, and the security engineer basically restarts. It is not guaranteed that review attempt 2 finishes either, because development is still ongoing and the code can still change significantly.
I don't have much experience with how much the code subject to these reviews typically changes, so it might be that the scenario where the code changes so drastically that it makes no sense to review the "old" state is rare. I could imagine this being a pretty frustrating situation for the reviewer (not that the current situation, where the target can be a moving one and there is no tracking of how the code changes, is much different from the scenario described above; it is probably even worse now).
I'd personally question the value of performing any security review at all if "the code has changed entirely" during the process of a review. IMO, that seems like an enormous waste of time and resources for a small team which already suffers from such constraints. The point of my original suggestion is two-fold: 1) the current policy doesn't offer any guidance regarding this issue, and 2) we need to be very specific about, and also work towards compromise on, what a reasonable level of volatility for a codebase is during the security review process. That's where I was trying to go in moving from my original proposal of a code freeze to the above four factors which may halt or delay a review. I've added this to the agenda for the Security Team's next group meeting (4/9/2019), so we should have some further insight after that. I'm hopeful we can reach some kind of compromise that does not inhibit Wikimedia developers but also does not place unfair expectations upon the Security Team, which would drive down the quality and effectiveness of our security reviews.
As I understand the discussion, I think Bryan and Brian (B&B) are speaking mostly the same language. Reviewing a project at 90% and having a viable product seem potentially analogous to me. I think the "minor" keyword here is meant to indicate that the handling of certain features or portions of an application should be mostly settled for a review to be useful to anyone. In compliance circles the language is often "new product or significant change to existing" for review or scanning. If a codebase is going to experience significant change between a security review and deployment then there is little point, so maybe we can talk about what "significant" means. I've seen this done at other places with a more codified release process at the "alpha" stage, which seems to equate. If major portions of a codebase (more than 10%?) change or are expected to, or if things like PII handling, encryption, external service integrations, authentication, authorization, input handling, third-party library versions, or user rights and group management are expected to change, then a security code review is premature. In that case we are in the arena of concept review, which is a separate process being fleshed out.
#Security-Teams Readiness Reviews workboard?
"See the #Security-Teams Readiness Reviews workboard". Where is that? Links please. Did you mean phab:tag/security-team-reviews/ instead?
Yes, it should. Fixed.
Security Readiness Review request form
The link "Security Readiness Review request form" only prefills the "Tags" field in Phabricator with #Security. Why should a new review task not go into the backlog of the #security-team-reviews project where it belongs?
Yes, it should. Fixed.