17:00:03 #startmeeting nova notification
17:00:04 Meeting started Tue Jun 7 17:00:03 2016 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:05 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:07 The meeting name has been set to 'nova_notification'
17:00:13 hello everyone!
17:00:13 \o
17:00:14 o/
17:00:20 o/
17:00:22 0/
17:00:31 boy this has gotten popular
17:01:02 everybody is eager to do some notification work :)
17:01:10 o/
17:01:45 \o/
17:01:58 OK, let's get started
17:02:08 #topic short status
17:02:27 we hit the non-priority spec freeze in nova
17:02:50 fortunately the main spec has been merged
17:03:00 so transformation can continue full speed
17:03:39 I created a wiki page about the legacy notifications I know about that need to be transformed #link https://wiki.openstack.org/wiki/Nova/VersionedNotificationTransformation
17:03:56 FYI non-prio feature freeze is in 20 days
17:04:00 #link http://releases.openstack.org/newton/schedule.html
17:04:14 yes, so we have 20 days to get things merged
17:04:36 thankfully johnthetubaguy has swooped in and started liking how things look
17:04:52 many thanks to johnthetubaguy
17:04:52 +2 == easy exception if need be
17:05:26 so the must-have transformations are up for review
17:05:43 if people want to help then please review those first
17:06:03 #link https://review.openstack.org/#/q/topic:bp/versioned-notification-transformation-newton
17:06:24 the wrap_exception, instance.delete and instance.update are the important ones first
17:06:25 wow that's a lotta +2
17:07:03 is the plan to get the move to the notification package separation done first?
17:07:38 yes, the transformation patches shall depend on the separation patches
17:07:48 and every current one is on top of that so far
17:08:17 is there a priority on which notifications to transform?
17:08:32 so it's best to get all those merged before moving forward with the rest of the notifications?
17:08:43 their implementation, I mean
17:08:49 syjulian: if you look at the wiki then you will see three tables; the first one contains the easy ones, so I would suggest starting there
17:09:09 but just to be aware, what's up is probably all that's going to get in for N
17:09:22 unless there was another one detailed in the original spec?
17:09:42 macsz: the best would be to get instance.delete merged first, as the notifications in the first table use the same payload as instance.delete
17:10:24 gibi: someone had keypair transformation as a spec too right?
17:10:28 I don't see code for that one yet
17:10:34 rlrossit: the original agreement was to have specs for wrap_exception, instance.delete and instance.update to get the pattern in place, then move forward with the others based on that.
17:10:52 rlrossit: the first table is easy as it uses the same ovos
17:11:17 oh are the others not going to require a spec, they'll just be bps?
17:11:49 rlrossit: in the keypair spec they proposed the transformation as well as some extension of the notification; that is why it needed a spec
17:12:03 ok
17:12:04 rlrossit: as far as I understand we don't need specs for the others
17:12:10 sounds good
17:12:31 so it'll just be like dictcompat stuff after these first ones
17:12:44 yeah, something like that
17:12:54 or like mox removal
17:13:41 so if somebody wants to help then take a notification from the first table
17:13:50 base your patch on the instance.delete patch
17:14:17 gibi: you said we need to have instance.delete merged first. are the other transformations depending on that?
17:14:18 the work there will be mostly testing the new sample
17:14:47 syjulian: the notifications in the first table use the same payload class as instance.delete
17:15:02 gotcha
17:16:05 I would say you can start transforming those notifications before instance.delete is merged, but you have to be prepared that the instance.delete patch will change due to review comments
17:16:50 cool
17:17:13 I suggest taking only one as we don't want to overload johnthetubaguy :)
17:17:30 besides notification transformation
17:17:39 there were a couple of specs merged proposing new notifications
17:17:54 I updated the etherpad with links to those specs
17:18:06 we should keep an eye on them as well
17:18:16 and provide review feedback
17:18:43 as far as I know, volume.swap and compute_node.update already have some code up for review
17:19:16 that was the 'short' update :)
17:19:21 #topic outstanding reviews
17:19:43 if you have anything about a specific review then let's discuss it now
17:19:58 rlrossit: thanks for the comments
17:20:12 gibi: glad to help out
17:20:20 I made sure to get through all of them before today
17:20:20 I pushed a patch on the json schema one yesterday
17:20:23 rlrossit: you had some concerns about the SCHEMA handling in nested objects
17:20:41 yeah mostly because I forgot what we decided for how those operate
17:20:52 syjulian: I will check it tomorrow, first thing in the morning
17:21:05 gibi: cool thanks!
17:21:18 so IpPayload is populated explicitly, so we don't need to give it a schema for where to find those things on the other objects, right?
17:21:29 right I remember now
17:21:42 SCHEMA is how we tell the fields to find things on the objects passed in, right?
17:21:45 rlrossit: right. IpPayload is basically calculated values
17:21:51 rlrossit: right
17:22:17 gibi: so then we'll do the same for the flavor payload right?
17:22:32 rlrossit: for johnthetubaguy's FlavorPayload proposal we can create a SCHEMA for the FlavorPayload to copy the data from the Instance object
17:22:50 gibi: beat me to it, I was in the middle of typing exactly that :)
17:22:51 I have that code half done locally, it works :)
17:23:03 woot, I get it now
17:23:20 or, I guess, re-get it
17:23:40 I have to hack a bit with the obj_reset_changes but we can clean that up in the next iteration
17:23:54 I mean next iteration of the patch :)
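[For reference, a minimal sketch of the SCHEMA idea discussed above. It assumes the NotificationPayloadBase class in nova/notifications/objects/ exposes a SCHEMA dict and a populate_schema() helper; the class, field names and constructor are made up for illustration and the object registration decorator is omitted, so the real FlavorPayload in the patch under review may look different.]

    # Illustrative sketch only, not the actual patch under review.
    from nova.notifications.objects import base as notification_base
    from nova.objects import fields


    class ExampleFlavorPayload(notification_base.NotificationPayloadBase):
        # SCHEMA maps a payload field to (kwarg given to populate_schema,
        # attribute to copy from that object); populate_schema() then
        # copies the values across instead of setting them one by one.
        SCHEMA = {
            'flavorid': ('flavor', 'flavorid'),
            'memory_mb': ('flavor', 'memory_mb'),
        }
        VERSION = '1.0'
        fields = {
            'flavorid': fields.StringField(nullable=True),
            'memory_mb': fields.IntegerField(),
        }

        def __init__(self, flavor):
            super(ExampleFlavorPayload, self).__init__()
            # Copies flavor.flavorid and flavor.memory_mb into the payload
            # according to SCHEMA.
            self.populate_schema(flavor=flavor)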
17:24:59 rlrossit: was there anything major in your comments?
17:25:48 the only thing is how heavy that functional test is
17:25:58 rlrossit: yeah, right.
17:26:03 but maybe we can iron that out in later patches when we have more instance notifications to work with
17:26:45 so it's not really a concern for me, just a heads up as to where to maybe look for common things later
17:26:47 I really would like to have a test that generates a complete sample. Here IpPayload needs a simulated neutron
17:27:01 yeah, true
17:27:15 simulating neutron is messy :)
17:27:30 * rlrossit agrees
17:28:02 but I agree that we should keep a single NeutronFixture that is good for the sample test and keep it in the test framework
17:28:33 I'm pretty sure we have a cinder fixture from when I was updating exception handling for cinder... I wonder if there's a neutron one hiding somewhere too
17:28:53 or actually I think I stubbed out the cinder client, maybe you can do the same?
17:28:53 there might be additions to it when I do the proper testing for instance.update as that has some bandwidth fields to fill
17:29:26 rlrossit: I will check but I don't remember seeing a neutron fixture before
17:30:12 https://github.com/openstack/nova/blob/master/nova/tests/functional/regressions/test_bug_1554631.py
17:30:22 here's a functional test I wrote for cinder interactions
17:30:28 though that was just exception raising
17:30:42 so probably not great to associate that with this
17:30:51 we can decide later
17:31:38 I'm already stubbing the neutron client in the fixture here: https://review.openstack.org/#/c/313654/22/nova/tests/functional/notification_sample_tests/test_instance.py L110
17:32:31 indeed you do... I missed that amidst the big stub_outs :)
17:32:40 I guess mine is more ugly as I need to stub more :)
17:33:31 the stub_out is there to simplify neutron interaction but I will double-check if it can be removed somehow
17:34:18 I'm fine with a blind +1 on that part as long as it works... will probably just need cleanup later
17:34:36 OK
17:35:24 rlrossit: anything else to raise regarding the patches you saw?
17:35:49 nope, I'll shut my loud mouth so we can keep moving :)
17:36:15 rlrossit: your feedback is really valuable, thank you!
17:36:46 syjulian: do you have issues to discuss regarding the json schema work you are doing?
17:37:14 gibi: nothing right now
17:37:34 syjulian: great. Let's continue in the review then
17:37:52 any other reviews we have to talk about today?
17:39:03 OK
17:39:04 #topic open discussion
17:39:48 so we have a list of notifications to transform on the wiki. you can take one if you have time
17:40:00 just keep the wiki up to date
17:40:28 do you have any questions regarding these low-hanging fruits?
17:41:10 what's a good way to listen to the notifications? I guess to verify they work
17:42:02 if you want to try out your code then you can set the notification driver to log in oslo
17:42:12 * gibi trying to find the name of the config
17:42:46 we discussed at one point in time to do notification testing in tempest or some other thing
17:43:03 but that's a lot longer off than you would like
17:43:09 (i.e., it isn't even started)
17:43:35 rlrossit: yeah, that would be nice to have; I guess it will take some cycles to get there
17:44:40 syjulian: I failed to find the name of the config right now. I guess it is notification_driver in oslo.messaging but I'm not sure
17:45:11 syjulian: from tests, you can use nova.tests.unit.fake_notifier to see what was emitted
17:45:39 it stubs at the lowest level in nova, so you will see everything
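[A rough sketch of the fake_notifier approach mentioned above. It assumes the module exposes stub_notifier(), reset() and a VERSIONED_NOTIFICATIONS list of serialized notification dicts; the exact helper signatures and the event type are assumptions, so verify them against nova.tests.unit.fake_notifier before relying on this.]

    # Sketch of a unit test inspecting an emitted versioned notification.
    from nova import test
    from nova.tests.unit import fake_notifier


    class TestInstanceDeleteNotification(test.TestCase):
        def setUp(self):
            super(TestInstanceDeleteNotification, self).setUp()
            # Replace the real notifier with the fake one that records
            # every notification in memory.
            fake_notifier.stub_notifier(self)
            self.addCleanup(fake_notifier.reset)

        def test_delete_sends_versioned_notification(self):
            # ... trigger the code path that deletes the instance here ...
            notification = fake_notifier.VERSIONED_NOTIFICATIONS[-1]
            self.assertEqual('instance.delete.end',
                             notification['event_type'])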
17:47:04 syjulian: here is the config variable https://github.com/openstack/oslo.messaging/blob/a705e0b6802dbe3ae44d9474bc9f344041cf9e13/oslo_messaging/notify/notifier.py#L33
17:47:51 if you set 'driver' to 'log' then the notifications will be logged with the standard logging instead of emitted on the rpc
17:48:06 nice
17:48:24 it's in the [oslo_messaging_notifications] section
17:48:27 of nova.conf
17:49:04 rlrossit: is it not just the [oslo_messaging] section?
17:49:42 gibi: ummm... it might depend on if you're using 2 different queues
17:50:04 but I think it when over to [oslo_messaging_notifications] when I split out the queues a few months ago
17:50:10 s/when/went
17:50:46 honestly I tried it many months ago so you must be right
17:50:59 would [oslo_messaging] imply every message, not just notifications, be set to logging?
17:51:40 not sure
17:51:45 probably
17:51:55 that would get a little noisy
17:53:43 OK. We still have 6 minutes if anybody has any questions
17:55:02 rlrossit: Can I ask you a favor? Could you give a short update at the nova team meeting on Thursday? That is a bit late for me
17:55:13 yup, can do!
17:55:23 * rlrossit should've taken notes
17:55:26 great! thanks a lot!
17:56:24 if nothing else then let's close.
17:56:30 thank you all for joining
17:56:31 have to go, thanks for the meeting :)
17:56:46 #endmeeting