18:01:07 #startmeeting networking_policy
18:01:08 Meeting started Thu Sep 7 18:01:07 2017 UTC and is due to finish in 60 minutes. The chair is SumitNaiksatam. Information about MeetBot at http://wiki.debian.org/MeetBot.
18:01:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
18:01:11 The meeting name has been set to 'networking_policy'
18:01:18 rkukura: thanks for chairing the meeting last time
18:01:33 tbachman: thanks for shepherding the discussion on the DB migrations
18:01:46 annakk: thanks for bringing up the UI patches
18:01:51 SumitNaiksatam: not sure I shepherded as much as watched
18:01:57 the sheep moved themselves ;)
18:02:05 and my apologies for going MIA at the last minute
18:02:07 baa
18:02:07 SumitNaiksatam: +1k to annakk
18:02:11 tbachman: lol!
18:02:15 rkukura: lol
18:02:19 :)
18:02:31 #info agenda https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Sept_7th_2017
18:02:36 we have a bit to cover today
18:02:39 lets get rolling
18:02:50 #topic mitaka, newton DB migration backport strategy
18:02:57 #link https://review.openstack.org/#/c/499185/
18:03:03 #link https://review.openstack.org/#/c/499240
18:03:34 i read the transcript of the last meeting, and i meant to respond last thursday itself, somehow it didnt happen, and then fast forward to today :-)
18:03:51 SumitNaiksatam: no worries. Good to get it firsthand here
18:04:01 i agree with rkukura and the general point that we have a short term problem to fix and then a longer term decision to make
18:04:03 (and thanks for catching up via transcript)
18:04:11 tbachman: np
18:04:40 we cannot do any more releases without solving the current mess we are in (regardless of what longer term strategy we use)
18:04:52 so lets first try to resolve that
18:05:19 would it make more sense to start with the mitaka to newton migration issue first?
18:05:33 SumitNaiksatam: Can we assume master or stable branch trunk chasing use cases are not a concern?
18:05:35 (the discussion will most likely apply to newton to ocata as well)
18:05:46 rkukura: yes, we never claimed to support them
18:05:54 rkukura: i mean we support them
18:06:03 but we did not claim that they would not break
18:06:08 ok, just wanted to make that explicit
18:06:22 rkukura: very good point, thanks for stating that upfront
18:07:29 so rkukura and tbachman you identified that there are a couple of migrations that are skipped between mitaka and newton?
18:07:36 for mitaka->newton upgrade, I see two migrations that will be missed
18:07:51 (these being different from the ones that are missing between newton and ocata)
18:08:03 5239 tenant_id->project_id
18:08:16 c460 NFP devstack
18:08:24 rkukura: okay
18:08:31 SumitNaiksatam: here was the list I’d compiled: https://pastebin.com/sstdM9ZZ
18:08:34 i also want to make a general comment regarding NFP
18:08:41 tbachman: great thanks
18:08:47 SumitNaiksatam: np!
18:09:02 * tbachman doesn’t have #link powers
18:09:08 or maybe I do?
18:09:08 oh
18:09:15 #link https://pastebin.com/sstdM9ZZ <= DB migrations
18:09:20 #chairs rkukura tbachman annakk
18:09:29 #chair rkukura tbachman annakk
18:09:30 Current chairs: SumitNaiksatam annakk rkukura tbachman
18:09:32 I guess I didn’t have topic powers
18:09:40 but I did have link
18:10:00 tbachman: I don’t see 5239 on your list
18:10:11 hmmmm
18:10:20 * tbachman tries to remember why
18:10:29 so my comment on NFP, to the extent i am aware of, we dont have a user currently deploying NFP on mitaka or newton
18:10:57 tbachman: is it because tenant_id wasn’t renamed project_id in the DB until newton?
18:11:23 * tbachman is looking
18:11:33 so while we should try our best to maintain the integrity of that component, we can make changes to that component without fear of disruption
18:11:34 * rkukura has not looked at when neutron renamed it
18:12:33 rkukura: which patch introduced that?
18:12:41 tbachman: to gbp?
18:12:46 yeah
18:12:48 i think the change has been going in neutron for a while
18:12:57 but i think we were forced to make the change in newton?
18:13:14 annakk did the change in GBP if i recall correctly
18:13:19 yes
18:13:27 and we didnt want to backport it then
18:13:43 ah
18:14:02 rkukura: I think my list was based on the most recent migration in stable/mitaka
18:14:06 so it likely doesn’t cover that one
18:14:10 (and good point)
18:14:24 yes it was annakk’s newton patch, so we don’t want it on mitaka
18:15:06 but it needs to run when upgrading from mitaka->newton, and it will not because mitaka’s HEAD is beyond it in newton’s chain
18:15:10 #link https://github.com/openstack/group-based-policy/commit/f7f1ce349e2ed7579f64430c8cf8f0f5137c8eec#diff-9ae7c407a57ed5bb5e916dbbf4530e8b <= patch with DB migration
18:15:17 rkukura: ack
18:15:24 rkukura: ack
18:16:26 so can we introduce conditional migrations at the HEAD that add those changes only if they are missing?
18:16:36 seems like, on stable/newton, we need to make that migration idempotent and move it closer to HEAD
18:16:51 SumitNaiksatam: I think we are saying the same thing
18:16:53 SumitNaiksatam
18:16:55 rkukura: right
18:17:06 I think its the best way to go
18:17:27 if we care about same-branch migrations
18:17:31 same thing for the NFP patch?
18:18:04 rkukura: i would vote in favor of doing it for the NFP patch as well, just to maintain the overall consistency
18:18:28 SumitNaiksatam: right, even if someone isn’t using NFP, the DB schema needs to be consistent with the code
18:18:32 the migration patch would however have to be introduced in master, and would have to backport its way to newton
18:18:58 rkukura: agree, depending on the schema change
18:19:11 SumitNaiksatam: I agree the same issue with the same two patches will be issues on stable/ocata and master
18:19:16 i guess thats what you mean by “consistent"
18:19:34 rkukura: ack
18:20:12 so there are two more migrations that will be missed on upgrade from mitaka or newton to ocata:
18:20:18 hmmm
18:20:22 da6a QoS
18:20:31 bff1 NFP
18:20:59 okay, rkukura thanks for digging those out
18:22:50 and my notes have one that at that time was HEAD on master but not back-ported to ocata: 4c0c APIC
18:24:10 so should our strategy be to do one patch that fixes everything on master that might be missed when upgrading from mitaka, and then manually back-porting the needed parts of that to stable/ocata and stable/newton?
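The "idempotent/conditional migration at HEAD" approach that SumitNaiksatam and rkukura converge on above could look roughly like the sketch below. This is only an illustration, not the actual GBP patch: the revision IDs, table name, and column names are placeholders, and a real fix would need a guarded block for each skipped migration (5239, c460, etc.).

```python
# Hypothetical alembic revision illustrating the conditional/idempotent
# migration idea. All identifiers below are invented for illustration.

from alembic import op
import sqlalchemy as sa

# revision identifiers, used by Alembic (placeholders).
revision = 'aaaa00000001'
down_revision = 'ffff00000000'


def upgrade():
    bind = op.get_bind()
    inspector = sa.inspect(bind)
    columns = [c['name'] for c in inspector.get_columns('gp_example_table')]

    # Apply the change only if the earlier (skipped) migration never ran,
    # so running this revision again on a branch whose schema already has
    # the column is effectively a no-op.
    if 'project_id' not in columns:
        op.alter_column('gp_example_table', 'tenant_id',
                        new_column_name='project_id',
                        existing_type=sa.String(length=255))


def downgrade():
    # Downgrade intentionally left empty in this sketch.
    pass
```

Because the revision sits at (or near) HEAD on each branch, a database coming from an older branch whose alembic version pointer is "past" the original migration still gets the schema change when it reaches this revision, while databases that already have it are untouched.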
18:24:43 rkukura: ack on one patch
18:24:44 rkukura: seems like a sound approach
18:24:54 agree
18:24:59 stable/ocata would be identical to master
18:25:29 SumitNaiksatam: I agree it should be, until there is something new on master that doesn’t get back-ported
18:25:55 rkukura: right
18:26:15 there is some difference between newton and ocata
18:26:24 namely, QoS and the NFP changes
18:26:30 SumitNaiksatam: right
18:26:33 okay so assuming we follow this plan
18:26:56 we will be left with different chains on different branches
18:27:00 I could do these patches, but probably wouldn’t get to them til mid next week. I’m flying BOS->SFO tomorrow (might be able to do them on the plane) and taking PTO Monday
18:27:03 should we revert the ip_pool changes the?
18:27:17 s/the/the/
18:27:18 dang
18:27:21 s/the/then/
18:27:27 * tbachman checks his “n” key
18:27:42 tbachman: 27b7 is on master, ocata, and newton, right?
18:27:47 rkukura: ack
18:27:57 but placement was the question
18:28:10 just wasn’t sure if it made things simpler
18:28:15 * tbachman is fine with leaving it as-is
18:28:29 but — I’ll revert the other 2 DB migration patches that I have
18:28:30 tbachman: you are thinking that we should put this change also in the “uber” backport patch?
18:28:32 is there a reason to have that change on newton, or not?
18:28:57 the IPv6 feature is all the way back to stable/mitaka
18:29:08 rkukura: ideally i think it should be in all the branches
18:29:09 the question is whether we want the same behavior of ip_pool
18:29:29 my thinking is that it should be the same
18:29:41 tbachman: ack
18:29:45 tbachman: ack
18:30:04 which leaves the stable/mitaka patch for that feature
18:30:27 #link https://review.openstack.org/#/c/499185/ <= backport of ip_pool size increase to stable/mitaka
18:32:25 so if we back-port this to stable/mitaka and 27b7 becomes HEAD on stable/mitaka, we will have more patches that would be missed upgrading from mitaka to newton or ocata
18:32:37 s/patches/migrations/
18:32:51 rkukura: so thats the longer term issue
18:33:00 guess so
18:33:11 my earlier comment - “we will be left with different chains on different branches”
18:33:29 so what do we do for future backports once we have cleaned up the mess
18:33:47 SumitNaiksatam: good transition
18:33:56 (ip_pools falling into that future backports category though we already have the patch ready)
18:34:16 why do we backport aggressively?
18:34:25 annakk: to support users :-)
18:34:29 one option is to try to keep the DB schema identical across the active branches
18:34:38 rkukura: right
18:35:07 users are reluctant to go forward in release?
18:35:09 annakk: typically the users, once they deploy a particular release, dont move as quickly
18:35:20 annakk: within the same release, yes
18:35:30 but moving forward to a new release takes more doing
18:35:41 i know people still using liberty
18:36:10 i mean legit users in production
18:36:29 annakk: also, GBP was evolving rapidly, so users wanted the latest GBP
18:36:40 rkukura: ack
18:36:55 i think there's a general consensus in openstack to avoid backporting features that require schema changes
18:37:09 annakk: completely agree and understand
18:37:24 annakk: but in those cases, the distro vendors are still doing the backporting
18:38:11 albeit selectively and perhaps on a per customer basis
18:38:18 * tbachman notes there are 2 classes of openstack users.
1) pre-production, who want the latest openstack release; 2) production users, who never want to move to a new release ;)
18:38:27 tbachman: lol
18:38:33 :)
18:38:52 tbachman: and on (2) not only do they not want to move, they still want all the latest features :-)
18:39:05 tbachman: Also, 1.5) want newest GBP on older OpenStack
18:39:08 and that latter part makes it really difficult to support
18:39:08 lol
18:39:25 rkukura: i categorized that as 2.5
18:39:53 anyway, annakk to your question, we decided to take the simple way out -
18:39:59 that is, try to keep all branches consistent
18:40:10 so should GBP start playing by the rules at some point?
18:40:20 that is, feature wise, to the extent possible
18:41:02 SumitNaiksatam: sounds like its only possible if new features don't conflict with neutron db all the way back
18:41:09 rkukura: i think if it was left to us, it would be much easier for us to just play by the rules since it reduces this headache of backporting
18:41:27 annakk: yes sure, and mostly we havent hit issues in that regard (luckily)
18:42:16 if GBP’s official stable branches followed OpenStack backporting rules, it would be possible to maintain different branches (unstable/mitaka, unstable/newton, unstable/ocata, …) that more aggressively back-port, but that is more work
18:42:53 rkukura: exactly, i was just saying that, we dont have a “distro” vendor like entity doing that “unstable” branch work for us
18:43:07 for other openstack projects its being done by those distro vendors
18:43:15 SumitNaiksatam: right
18:43:42 thanks for the 101, makes sense
18:43:42 but even then, new features are typically not back-ported, are they?
18:43:58 s/are/aren’t/
18:44:00 rkukura: i agree, mostly not features
18:44:55 but i guess its easier for them to put the stake in the ground for those projects
18:46:12 of course we can decide to do the same thing here, but i am not sure if this is really beyond our control in the sense that if a user asks for a feature to be backported, would we actually be able to say no!
18:46:42 * tbachman wonders if we’re any closer to a path forward here
18:47:01 tbachman: thanks for the reality check :-)
18:47:11 sorry — not trying to be negative
18:47:25 seems like we have several options
18:47:29 tbachman: no, didnt mean to suggest that you were implying that
18:47:31 it would be good to keep some flexibility
18:48:23 one thing is for sure, we need to be much more selective in backporting features/patches that involve DB migrations (assuming we even decide that we will allow those and have a strategy around it)
18:48:26 rkukura: ack
18:48:41 I think a good case to think about is 5239 (tenant->project), which should definitely not be back-ported to stable/mitaka, but is definitely needed when upgrading from mitaka to anything newer
18:48:44 this current mess is a good opener in that sense
18:49:30 rkukura: so that would be the argument against making all migration chains identical?
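For context on why the chains diverge at all: when a migration is back-ported to an older stable branch, it is appended to that branch's HEAD, so its down_revision differs per branch, and an upgrade across branches only replays revisions that come after the old branch's recorded HEAD in the new branch's chain (rkukura's "mitaka's HEAD is beyond it in newton's chain" point). The headers below are a hypothetical illustration only; the file paths and revision IDs are invented, not the real GBP migrations.

```python
# --- hypothetical versions/<newton>/cccc1111_ip_pool_size.py ---
# On newton/master the back-ported change is parented on that branch's chain,
# after the tenant_id->project_id rename (5239).
revision = 'cccc1111'
down_revision = '5239aaaa'      # placeholder for the rename revision

# --- hypothetical versions/<mitaka>/cccc1111_ip_pool_size.py ---
# On stable/mitaka the rename is intentionally absent, so the same change is
# re-parented onto mitaka's previous HEAD instead.
revision = 'cccc1111'
down_revision = 'bbbb0000'      # placeholder for mitaka's old HEAD
```

With chains shaped like this, a mitaka database recorded at 'cccc1111' that is upgraded against newton's chain will never visit the rename revision, which is exactly the class of skipped migrations being discussed.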
18:50:17 SumitNaiksatam: you had mentioned the idea of backporting with a no-op
18:50:48 tbachman: thanks, i did, but that is not the ultimate solution, because that introduces the issue of skipping that migration altogether
18:50:57 when migrating from one branch to the next
18:51:16 i meant the no-op only for upgrades within the same branch
18:51:41 instead of no-op we could do conditional, but i dont think that would solve the problem
18:51:49 completely
18:51:59 we have 9 mins remaining
18:52:05 I don’t see how no-op solves anything
18:52:38 * tbachman nods
18:53:06 no-op could potentially solve the problem of getting the chains to be temporarily identical across the branches
18:53:41 but they can't stay identical forever, can they?
18:53:43 I think it might help to keep the migrations that are not backported to older branches in front of (closer to HEAD than) migrations that are back-ported
18:53:54 but we are solving the short term problem in a better way here by introducing the idempotent migrations at the head
18:54:12 annakk: agree
18:54:18 no-op makes the chains look identical, but they still are not really identical
18:54:31 annakk: but we would have more control over what we merge in the future
18:54:46 rkukura: right, schema is not identical
18:55:08 how do we handle the situation when feature X is not backported, but after that comes feature Y that is backported?
18:55:16 rkukura: regarding your earlier point, it can still get us to the same situation we are currently in
18:55:18 no-op for X?
18:55:49 * tbachman understands why the OpenStack policy is not to backport DB migrations :(
18:55:55 i dont think we should discuss no-op since it doesnt solve the longer term problem
18:56:05 sorry that i brought it up :-(
18:56:15 SumitNaiksatam: I did actually — sorry
18:56:24 (in this meeting, that is)
18:56:40 tbachman: i take the responsibility
18:56:41 okay so in the interest of time
18:56:49 we will have to go back to emails
18:56:54 lets try to close this by tomorrow
18:57:07 perhaps we can take a break and think again with some more clarity
18:57:16 #topic UI patches
18:57:21 i have tried the ocata UI
18:57:32 annakk: does your patch fix the issues you are seeing?
18:57:34 I think a policy on ordering the migrations when back-porting could at least prevent the need to shuffle them and make them idempotent going forward
18:57:53 rkukura: definitely
18:58:07 lets cover UI
18:58:13 sorry i meant i *have not* tried the ocata UI
18:58:16 SumitNaiksatam: it fixes one issue, and I'm not sure about the rest since I discovered issues linger after horizon restart
18:58:20 i have tested newton recently
18:58:31 annakk: okay, i will give it a try
18:58:56 sorry that you ran into this after we released ocata, should have been caught before release
18:59:19 annakk: thanks also for starting to look at the UI patches
18:59:24 tbachman: thanks to you as well
18:59:32 #topic py27 logs
18:59:39 SumitNaiksatam: np!
18:59:43 I'll continue testing it on top of the fix
18:59:53 please take a look here #link https://wiki.openstack.org/wiki/Meetings/GroupBasedPolicy#Sept_7th_2017
18:59:56 annakk: thanks
19:00:06 in the past we had cleaned up
19:00:25 but again we have a huge amount of repetitive logging
19:00:39 makes it very difficult to debug test failures
19:00:49 we need to clean up/suppress
19:01:00 and check if there are any real issues hidden here in the error message
19:01:04 *messages
19:01:08 thanks all for joining today
19:01:10 #endmeeting
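On the py27 log-noise item raised at the end, one common way to suppress repetitive output from known-noisy loggers during the unit test run is to raise their level in a shared test base class. The sketch below is an assumption-laden illustration, not the actual GBP test fixture: the logger names are examples only, and the real cleanup would target whichever loggers are flooding the py27 job.

```python
# Minimal sketch of silencing noisy loggers in python unit tests.
# Logger names are illustrative placeholders, not the real offenders.

import logging
import unittest


class QuietLoggingTestCase(unittest.TestCase):
    """Base class that raises the level of known-noisy loggers for a test."""

    NOISY_LOGGERS = ('neutron.api.extensions', 'neutron.callbacks')  # examples

    def setUp(self):
        super(QuietLoggingTestCase, self).setUp()
        self._saved_levels = {}
        for name in self.NOISY_LOGGERS:
            logger = logging.getLogger(name)
            self._saved_levels[name] = logger.level
            logger.setLevel(logging.ERROR)  # keep genuine errors visible
        self.addCleanup(self._restore_levels)

    def _restore_levels(self):
        # Restore original levels so other tests see unchanged logging config.
        for name, level in self._saved_levels.items():
            logging.getLogger(name).setLevel(level)
```

Raising levels (rather than disabling logging outright) keeps real errors in the captured output, which matches the goal stated in the meeting of checking whether any real issues are hidden in the noise.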