17:00:00 #startmeeting nova notification
17:00:01 Meeting started Tue Sep 26 17:00:00 2017 UTC and is due to finish in 60 minutes. The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:05 The meeting name has been set to 'nova_notification'
17:00:28 hello
17:00:33 o/
17:01:31 let's get started. if others join later they can read back :)
17:01:47 weekly status mail: #link http://lists.openstack.org/pipermail/openstack-dev/2017-September/122589.html
17:02:15 The first thing I'd like to talk about is the api.fault notification
17:02:29 I have a bug open https://bugs.launchpad.net/nova/+bug/1699115
17:02:31 Launchpad bug 1699115 in OpenStack Compute (nova) "api.fault notification is never emitted" [Medium,In progress] - Assigned to Balazs Gibizer (balazs-gibizer)
17:02:53 and an ML post asking for a way forward
17:02:59 http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
17:03:20 as I got no response I went ahead and proposed one of the possible ways forward
17:03:25 https://review.openstack.org/#/c/505164/ Remove dead code of api.fault notification sending
17:03:26 is this both legacy and versioned?
17:03:30 it is just legacy
17:03:55 I realized that it has not been emitted for a long time when we tried to transform it to versioned
17:05:13 it is definitely not emitted since v2.0 is supported through the v2.1 code base
17:05:35 ok, so reading the commit message, it sounds like we regressed this when doing the 2.1 wsgi layer
17:05:45 yes, I think so
17:05:47 which isn't surprising if it wasn't tested as a stack
17:07:16 so my proposal is to simply drop state that the api.fault notification is not a thing
17:07:25 s/drop//
17:07:44 added in 2012 https://review.openstack.org/#/c/13288/
17:08:00 for rax, when they were v2, and i don't think rax ever got to the point of supporting/using microversions
17:08:08 johnthetubaguy or mikal might know the details there
17:08:51 do you imply that we might actually want to fix this old regression?
17:09:14 no, just collecting data
17:09:18 OK
17:09:48 I can ask the guys if they see any use of api.fault
17:09:50 sorry just leaving comments on the change
17:10:02 neither of those guys work at rackspace anymore
17:10:06 but they might know about this
17:10:57 sure
17:11:04 did your ML thread cross post to the ops list?
17:11:27 nope
17:11:33 actually not, so that would be good as well
17:11:49 yeah, so i can reply and cross post it, and say we are going to remove it if no one speaks up
17:12:04 that works for me
17:13:05 let's move on
17:13:44 the next is a friendly reminder that the bdm optimization patches are actually ready https://review.openstack.org/#/q/topic:bug/1718226
17:14:13 and now tied to a bug instead of a bp
17:14:53 yeah, that's on my list,
17:14:56 but so are lots of things
17:14:58 cool
17:15:12 I mean that it is cool that it is on the list :)
17:15:18 anyhow moving on
17:15:48 I've just realized that our transformation burndown chart needs to move to openshift 3
17:15:58 and openshift 3 does not have cron jobs in the free account
17:16:09 so I actually have to find another free hosting
17:16:32 it means that the burndown might be unavailable after the 30th for a while
17:16:52 ah
17:17:06 maybe ask sdague what he used for the api-ref burndown chart?
17:17:16 I can do that
17:17:18 it might have been his own server though
17:17:33 another option is maybe openstack-infra has ideas for hosting
17:17:41 I could use my raspberry but that only has dynamic dns and that is filtered in some parts of the world
17:18:18 I will ask sdague and keep the raspberry as option B
17:18:51 OK, moving on
17:19:15 last item on my list is Takashi's comment on https://review.openstack.org/#/c/475860/
17:19:48 in short I'm proposing to use common json pieces for the notification samples to avoid duplication
17:19:54 but it has a side effect
17:20:42 the common json sample fragment might not be what we want to show in the doc as a sample
17:21:09 ah yeah, that could get confusing...
17:21:15 as it might not be 100% a real sample because some of the parts are replaced during the test execution only
17:21:50 I'm still in the process of thinking about possible solutions
17:22:01 one is to store the replacements as a file as well
17:22:27 and apply those replacements not only during the test execution but also during the doc generation
17:22:43 this means more files :/
17:22:46 that might be more trouble than it's worth
17:22:51 yepp
17:22:51 and it's confusing and complicated
17:22:53 for new contributors
17:23:58 basically we now have 80 lines of duplicated sample for 8 lines of difference
17:24:07 for each event_type
17:24:25 so I still want to reduce that duplication if possible
17:24:42 but I'm a bit stuck about the how
17:25:18 can you remove the problem fields from https://review.openstack.org/#/c/452819/11/doc/notification_samples/payloads/InstanceActionPayload.json ?
17:25:24 like task_state, power_state, vm_state?
17:26:05 or, provide a way to override what's in the template in https://review.openstack.org/#/c/475860/5/doc/notification_samples/instance-power_off-end.json ?
17:26:10 hm, that means I would have to merge json fragments instead of combining them. but most probably doable
17:26:14 or would those show up twice?
17:26:47 we anyhow have our own jsonref implementation so we can tweak that not to duplicate fields
17:27:02 and then we should not call it jsonref any more :)
17:27:15 gibiref
17:27:19 lol
17:27:36 I feel that this could work. thanks for the idea, I will dig into it
17:28:14 that was my last item for today. Do you have things to discuss?
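[Editor's note: the "override what's in the template" idea discussed above — keeping shared payload fields in one common fragment and letting each per-event_type sample carry only its differences — can be sketched as a recursive dict merge. This is a hypothetical illustration, not nova's actual jsonref implementation; the fragment shapes and field names are made up for the example.]

```python
import copy
import json


def merge_sample(common, overrides):
    """Merge a per-sample override dict into a deep copy of the common
    fragment. Nested dicts are merged key by key; scalar overrides win.
    The common fragment itself is never mutated."""
    result = copy.deepcopy(common)
    for key, value in overrides.items():
        if isinstance(value, dict) and isinstance(result.get(key), dict):
            result[key] = merge_sample(result[key], value)
        else:
            result[key] = value
    return result


# Hypothetical common fragment shared by many instance action samples.
common = {
    "nova_object.name": "InstanceActionPayload",
    "nova_object.data": {
        "uuid": "178b0921-8f85-4257-88b6-2e743b5a975c",
        "state": "active",
        "power_state": "running",
    },
}

# A power_off-end sample then only stores the 8 lines that differ,
# instead of duplicating the whole 80-line payload.
overrides = {
    "nova_object.data": {
        "state": "stopped",
        "power_state": "shutdown",
    },
}

merged = merge_sample(common, overrides)
print(json.dumps(merged, indent=2))
```

Because the merge recurses into nested objects rather than replacing them wholesale, overridden fields appear exactly once in the merged output — which is the tweak mentioned above that a plain jsonref-style inclusion would not give you.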
17:28:42 nope
17:28:50 just trying to do some scale testing for dan's instance list changes
17:28:55 so we have a benchmark
17:29:13 I'm wondering if we want to make that a ci job for some performance testing
17:29:33 there is or was a rally ci job at one point
17:29:36 it can even uncover bugs that only appear if lots of instances are booted in parallel
17:29:50 and tempest used to have the largeops job that would create 300 instances at a time, 3 times in a row
17:30:05 but we didn't want either as voting since they were too unpredictable
17:30:24 what we'd really need is something that runs periodically and posts results so we can see trends
17:30:45 I guess your trial with the fake driver is more predictable as it doesn't really spawn instances
17:30:46 it was on my own server, that being said, I'd be fine hosting something else if we have a pattern here for the burndowns
17:31:23 sdague: thanks for the offer.
17:31:45 the notification burndown is a fork of your api-ref burndown
17:32:11 it is static html + some json data generated periodically
17:32:25 https://github.com/gibizer/nova-versioned-notification-transformation-burndown
17:34:12 it has some glue stuff for openshift 2 but that will be useless after the end of this month
17:36:14 anyhow I think we can sort out the details outside of this meeting
17:36:39 if nothing else then I will close the meeting
17:37:26 then thanks for the discussion
17:37:30 #endmeeting