15:03:00 #startmeeting neutron_upgrades
15:03:01 Meeting started Mon Jan 25 15:03:00 2016 UTC and is due to finish in 60 minutes. The chair is ihrachys. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:03:02 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:03:04 The meeting name has been set to 'neutron_upgrades'
15:03:10 o/ to all
15:03:24 #topic Organisational matters
15:03:48 rossella_s: wanna talk about objects sprint?
15:03:55 ihrachys, sure!
15:04:18 ihrachys, so the idea is to gather in Europe to make some progress regarding the introduction of ovo
15:04:35 anybody interested?
15:04:43 I'm interested
15:04:44 me obviously :)
15:04:48 :P
15:04:52 me too
15:04:58 mhickey: cool
15:05:12 ihrachys: would need travel approval first
15:05:15 me too!
15:05:33 ok, I guess we have some ideas about what the place would be. One suggestion was Brno (Red Hat office), another was Nuremberg (SuSE)
15:06:21 I am obviously welcoming everyone to Brno :D [though I will need to check with local office guardians to make sure it would be avail]
15:06:37 what would be the time for that?
15:06:43 regarding Nuremberg, it's confirmed that it's available
15:07:06 ihrachys, good question... end of February?
15:07:13 how do others feel?
15:07:27 unless someone plans to go to Minnesota, I am fine with end of Feb
15:07:32 don't do end of feb please
15:07:46 I'm already going to the QA midcycle end of feb... which is booked the same week as the neutron midcycle
15:07:54 err - which the neutron midcycle double booked
15:07:57 not the last week of February for me
15:07:58 please don't triple book. :(
15:08:09 15th?
15:08:14 ok, let's look at the middle of March maybe?
15:08:19 February*
15:08:25 it depends how much we would like to achieve
15:08:29 * ihrachys wonders how to check against over-booking
15:08:31 march please.
Enough time to ask for travel approval
15:08:44 March is ok for me
15:08:50 also not get screwed by last-minute airfare
15:08:51 korzen: elaborate
15:09:16 I mean that Mitaka-3 is at the beginning of March
15:09:33 if we would like to progress with OVO for Mitaka, we should gather sooner
15:09:51 korzen: probably a good point
15:10:00 March would be too late
15:10:07 o/
15:10:16 korzen: I suspect there is no feature-wise push for getting it exactly in Mitaka. If we do progress for the very start of N, that would still be a reasonable time for that.
15:10:20 yeah but we didn't plan well enough in advance to do an in-person meeting
15:10:33 * SamYaple is from the Kolla team, sorry for being late
15:10:41 SamYaple: hi!
15:10:47 now we're going to try and get everyone to book with about 2 weeks notice to fly to Europe? ehhhhh
15:11:06 korzen: I think we can still make some Mitaka progress offline too. It's not like we will wait till March to do anything.
15:11:26 ok, just saying my point
15:11:47 sc68cal: what would be a better time? mid March ok or too early?
15:12:01 I think mid March is reasonable
15:12:54 ok, let's figure it out really quick. rossella_s, let's discuss location afterwards.
15:12:57 * sc68cal is being opinionated because he's going to submit a travel budget for this
15:13:07 sc68cal: that's pretty cool :)
15:13:36 sc68cal: btw fwaas 2.0 could be a good candidate for ovo too ;)
15:13:46 ok, let's move on
15:13:48 #topic Partial Multinode Grenade
15:13:51 how long would the sprint take?
15:13:51 ihrachys: indeed. We talked about that at the midcycle
15:14:06 #undo
15:14:07 Removing item from minutes:
15:14:12 korzen: that's a good question.
15:14:18 3 days?
15:14:20 3 days?
15:14:21 yes
15:14:24 :)
15:14:54 I guess 3 days is the minimum amount of time to make decent progress
15:15:01 ok, so let's say it will be 3 days, starting Monday the 7th or the 14th
15:15:16 the 14th is more mid March ;)
15:15:36 ok, moving on
15:15:39 #topic Partial Multinode Grenade
15:16:03 sc68cal: I think we are almost there with https://bugs.launchpad.net/neutron/+bug/1527675
15:16:04 Launchpad bug 1527675 in neutron "Neutron multinode grenade sometimes fails at resource phase create" [High,Confirmed] - Assigned to Sean M. Collins (scollins)
15:16:04 right?
15:16:18 yep
15:16:18 just a patch or two to merge to fix that MTU issue.
15:16:29 just need your backport to devstack for stable/liberty to go through
15:16:33 and the devstack-gate change
15:16:39 * sc68cal fetchese URLs
15:16:44 *fetches
15:17:15 yeah. I believe once those are in, we'll have 3 test failures to fix, all while ssh-ing using a floating IP
15:17:58 I actually thought those would be fixed by some of https://review.openstack.org/#/q/status:open+project:openstack-infra/devstack-gate+branch:master+topic:multinode-neutron-mtu but it did not look that way when I checked gate results.
15:18:28 we'll probably need another bug to report once we get the latest gate results with the MTU fixes in
15:18:32 ack.
15:18:34 https://review.openstack.org/267605
15:18:41 https://review.openstack.org/267847
15:18:56 for those following at home
15:19:07 I was playing with all needed patches with the https://review.openstack.org/#/c/265759/ fake patch. Anyone can use it to check experimental to collect the latest logs.
15:20:00 sc68cal: that MTU discussion on openstack-dev, does it reveal any specific action items that could help the job?
15:20:12 * ihrachys hasn't checked it just yet
15:20:31 ihrachys: not yet. I think we're still in the exploring phase. Sam-I-Am has hardware and is poking things
15:21:00 but the consensus is this whole thing is just silly and we need to simplify it.
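[Editor's aside on the MTU issue discussed above — a minimal sketch, not neutron code. It assumes the commonly cited VXLAN-over-IPv4 encapsulation overhead (8 B VXLAN + 8 B UDP + 20 B outer IPv4 + 14 B inner Ethernet = 50 B) to show why tunnels force a reduced instance MTU unless the physical network allows jumbo frames.]

```python
# Why tunnel overlays need either jumbo frames on the underlay or a
# reduced instance MTU. Overhead figure assumes VXLAN over IPv4.
VXLAN_OVERHEAD_IPV4 = 8 + 8 + 20 + 14  # VXLAN + UDP + outer IPv4 + inner Ethernet = 50 bytes

def max_instance_mtu(underlay_mtu, overhead=VXLAN_OVERHEAD_IPV4):
    """Largest MTU an instance can use without the encapsulated frame
    exceeding the physical network's MTU."""
    return underlay_mtu - overhead

# Plain 1500-byte underlay: instances must drop to 1450.
print(max_instance_mtu(1500))  # 1450
# Jumbo frames on the physical layer: instances keep a full 1500.
print(max_instance_mtu(1550))  # 1500
```

[IPv6 underlays add another 20 bytes of outer header, shrinking the instance MTU further — hence "if you don't have access to jumbo frames but want to do tunnels" being the painful case.]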
15:21:15 it won't be neutron anymore, will it?
15:21:31 I mean if we as neutron devs can't set the MTUs correctly at the gate for our CI jobs, how in the hell is anyone else supposed to have a chance at this
15:22:05 jumbo frames! isn't that what we suggest now?
15:22:17 I mean, allowing them on the physical layer
15:22:39 right
15:23:12 but if you don't have access to jumbo frames but want to do tunnels.... we screw them
15:23:30 yeah. ok, we'll dig more. and the next is...
15:23:33 #topic Object implementation
15:23:45 sc68cal: devs ... it's an ops issue ;)
15:24:13 on that side, I think we finally started making some progress. there are patches for port and network, also allowed addresses
15:24:20 allowed address pairs: https://review.openstack.org/#/c/268274/
15:24:29 port: https://review.openstack.org/#/c/253641/
15:24:36 subnet: https://review.openstack.org/#/c/269056
15:25:02 Subnet is: https://review.openstack.org/#/c/264273
15:25:21 oh, sorry
15:25:36 need to fix the agenda :)
15:26:17 yep the agenda is outdated, my fault too
15:26:19 there is also a follow-up for the hasher test that should guard against unexpected API changes for objects: https://review.openstack.org/270230
15:26:40 the test was working for the most part, but just not guaranteed :)
15:27:36 ok, moving forward
15:27:39 #topic other patches
15:28:05 mhickey successfully pushed has_offline_migrations this week: https://review.openstack.org/248190 Congrats!
15:28:25 ihrachys: thanks, and to all reviewers
15:28:36 that should help folks automate expand-only online upgrades (actually requested by the ansible folks)
15:29:07 also ajo proposed an upgrade patch for rpc callbacks: https://review.openstack.org/265347
15:29:36 that's somewhat related to versioned objects (currently affecting qos objects only, but it will later be used for other resources)
15:29:38 lots of stuff to review
15:29:42 oh yeah
15:30:16 one question regarding the rpc callbacks: can someone point to the code where the OVO is sent over the wire?
15:30:30 sec
15:30:59 korzen: https://github.com/openstack/neutron/blob/master/neutron/api/rpc/handlers/resources_rpc.py#L139
15:31:07 ok thx ihrachys
15:31:36 that rpc callbacks patch is a scary beast, the more eyes the better.
15:32:06 ok. speaking of partial upgrades, we still need that backport to fix rolling upgrades for security groups for K->L: https://review.openstack.org/268697
15:32:19 (not that we are going to gate on it, but still)
15:32:49 there is also a small devref change to clarify our current strategy for rolling upgrades for notifications: https://review.openstack.org/268125
15:33:44 that's about it on the patches side from me. anything else we should care about?
15:34:03 not that I know
15:34:04 speaking about https://review.openstack.org/#/c/269056
15:34:11 "Use Oslo Versioned serializer for RPC messages"
15:34:28 it affects the OVO sent over the main RPC
15:34:44 because right now we are sending only dicts, without metadata
15:34:58 ah right, that one. I was not sure about that one. at least until we have a case where we push an object directly, omitting modules like rpc callbacks
15:35:48 korzen: the way it's currently handled for rpc callbacks is that the version is implied from the topic name.
15:36:17 ok, so metadata is sent
15:36:22 so no*
15:36:26 there may be cases when we want to send metadata as part of the payload, but I would like to see them before we go with using that serializer.
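[Editor's aside on the "dicts without metadata" point above — a hand-rolled sketch, NOT oslo.versionedobjects itself, of the difference between sending a bare dict over RPC and sending an OVO-style primitive that carries versioning metadata, which is what the proposed serializer would add. The function names and the 'Port' payload are purely illustrative.]

```python
# Illustrative only: contrast a bare-dict RPC payload with a primitive
# wrapped in OVO-style version metadata.

def to_bare_dict(obj_name, version, data):
    # What the main RPC effectively carries today: the data alone.
    # The receiver has no way to tell which object version produced it.
    return dict(data)

def to_versioned_primitive(obj_name, version, data):
    # What an OVO-aware serializer would carry: data plus metadata,
    # so the receiving side can check whether it understands this version
    # before trying to rebuild the object.
    return {
        'versioned_object.name': obj_name,
        'versioned_object.version': version,
        'versioned_object.data': dict(data),
    }

bare = to_bare_dict('Port', '1.0', {'id': 'p1', 'mtu': 1450})
wrapped = to_versioned_primitive('Port', '1.0', {'id': 'p1', 'mtu': 1450})

# Only the wrapped form supports a version check on the agent side:
assert 'versioned_object.version' in wrapped
assert 'versioned_object.version' not in bare
```

[This is also the crux of korzen's later point: without the metadata, an agent receiving an OVO payload cannot rebuild the object and decide whether it can handle that version — the rpc callbacks module sidesteps this today by implying the version from the topic name.]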
15:36:59 ok, the serializer would hit us twice then
15:37:03 korzen: hm, I should actually check it. maybe it does. it's just that it's not vital there.
15:37:15 one hit will be when introducing OVO on the agent side
15:37:56 and then introducing the serializer will impact the agent again
15:38:24 not sure I follow
15:39:21 when we send an OVO over the wire, we should be able, on the agent side, to rebuild the OVO object to see if the agent is able to handle the version
15:40:07 it would be reasonable to introduce the serializer before sending the OVO
15:40:33 but maybe someone more experienced with OVO has a better point of view
15:41:52 korzen, not to discourage you, but I think the idea is that for now this serializer is not needed, so you can start working on this patch again when there's a real need
15:42:16 it can also be done step by step: OVO at the server side, serializer, OVO at the agent side
15:42:57 let's first introduce OVO, then we can work on the serializer. Without OVO in place I don't see the point
15:43:54 ok, for OVO at the server side it would be OK
15:44:46 for the very first OVO phase, I would not expect us to expose objects to agents. that would require RPC refactoring, which is a separate deal.
15:44:46 korzen, and thanks for all your work! you can resume the serializer when the time is ripe for it
15:45:20 ok, next is...
15:45:26 #topic object ERD
15:46:30 not much on that one; but fyi ski2 sent me some core resource models' ERD in private, I will need to look into it and will send it to openstack-dev@
15:46:52 or maybe just ask him to share with everyone :)
15:47:05 :)
15:47:13 that diagram should help us make calls on what's next to tackle for objectification
15:47:30 ok, and next is...
15:47:33 #topic Open discussion
15:47:36 I started an extension of sphinx to autogenerate the diagrams,
15:47:54 electrocucaracha: oh nice. are you working in sync with ski2?
15:48:02 yes
15:48:36 ohh sorry, I'm in sync with saisriki
15:49:02 but I'll send the instructions to ski2 as well
15:49:15 electrocucaracha: ok, as long as it's not a parallel effort :)
15:49:29 electrocucaracha: and thanks! :)
15:50:12 electrocucaracha, nice nickname :D
15:50:23 :) thanks rossella_s
15:50:53 I was working on the ERD in sync with electrocucaracha and ski2
15:51:21 The ERD is available for viewing at https://www.gliffy.com/go/html5/9741595?app=1b5094b0-6042-11e2-bcfd-0800200c9a66
15:52:48 saisriki: note it's some proprietary webapp with a trial period. we need something more stable for that, even short term :)
15:53:22 ack
15:53:33 thanks again
15:53:38 anything else folks?
15:53:39 my understanding is that the final solution will be a script to autogenerate it
15:54:07 electrocucaracha: that's the ideal, yes
15:54:56 https://github.com/electrocucaracha/schemadisplay_sphinx
15:55:18 that's the sphinx extension that I'm working on
15:55:36 nice. will it get a separate pypi package?
15:55:42 or do we put it into the neutron tree?
15:56:07 I don't think so, I just placed the code there
15:56:13 ihrachys: we probably just add it to conf.py in neutron's doc/
15:56:26 in the list of extensions, and then add it to requirements
15:56:54 doc/source/conf.py
15:56:55 that's ideal. that would mean a pypi package for the extension though.
15:57:36 ok, once we get the code, we'll handle the integration in some way :)
15:57:47 thanks all for joining!
15:57:57 and have a great week :)
15:57:58 #endmeeting
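[Editor's aside on the conf.py integration mentioned near the end — a hypothetical sketch of what wiring the extension into neutron's doc/source/conf.py could look like, assuming the extension ships as an installable package. The module name 'schemadisplay_sphinx' is taken from the GitHub URL in the log; the exact name and any existing entries in the list are assumptions.]

```python
# doc/source/conf.py (fragment) -- hypothetical integration sketch.
# Sphinx loads each entry in `extensions` as an importable module, which
# is why a separate pypi package (or an in-tree module on sys.path)
# would be needed.
extensions = [
    'sphinx.ext.autodoc',     # existing entries stay as-is (assumed)
    'schemadisplay_sphinx',   # the ERD-autogeneration extension
]
```

[The package would also need to be added to the docs requirements so the gate's doc build can import it.]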