17:00:00 <gibi> #startmeeting nova notification
17:00:01 <openstack> Meeting started Tue Sep 26 17:00:00 2017 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:03 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:05 <openstack> The meeting name has been set to 'nova_notification'
17:00:28 <gibi> hello
17:00:33 <mriedem> o/
17:01:31 <gibi> let's get started. if others join later they can read back :)
17:01:47 <gibi> weekly status mail: #link http://lists.openstack.org/pipermail/openstack-dev/2017-September/122589.html
17:02:15 <gibi> The first thing I'd like to talk about is the api.fault notification
17:02:29 <gibi> I have a bug open  https://bugs.launchpad.net/nova/+bug/1699115
17:02:31 <openstack> Launchpad bug 1699115 in OpenStack Compute (nova) "api.fault notification is never emitted" [Medium,In progress] - Assigned to Balazs Gibizer (balazs-gibizer)
17:02:53 <gibi> and an ML post asking for a way forward
17:02:59 <gibi> http://lists.openstack.org/pipermail/openstack-dev/2017-June/118639.html
17:03:20 <gibi> as I got no responses I went ahead and proposed one of the possible ways forward
17:03:25 <gibi> https://review.openstack.org/#/c/505164/ Remove dead code of api.fault notification sending
17:03:26 <mriedem> is this both legacy and versioned?
17:03:30 <gibi> it is just legacy
17:03:55 <gibi> I realized it had not been emitted for a long time when we tried to transform it to versioned
17:05:13 <gibi> it is definitely not emitted since v2.0 has been served through the v2.1 code base
17:05:35 <mriedem> ok, so reading the commit message, it sounds like we regressed this when doing the 2.1 wsgi layer
17:05:45 <gibi> yes, I think so
17:05:47 <mriedem> which isn't surprising if it wasn't tested as a stack
17:07:16 <gibi> so my proposal is to simply state that api.fault notification is not a thing
17:07:44 <mriedem> added in 2012 https://review.openstack.org/#/c/13288/
17:08:00 <mriedem> for rax, when they were v2, and i don't think rax ever got to the point of supporting/using microversions
17:08:08 <mriedem> johnthetubaguy or mikal might know the details there
17:08:51 <gibi> do you imply that we might actually want to fix this old regression?
17:09:14 <mriedem> no, just collecting data
17:09:18 <gibi> OK
17:09:48 <gibi> I can ask the guys if they see any use of api.fault
17:09:50 <mriedem> sorry just leaving comments on the change
17:10:02 <mriedem> neither of those guys work at rackspace anymore
17:10:06 <mriedem> but they might know about this
17:10:57 <gibi> sure
17:11:04 <mriedem> did your ML thread cross post to the ops list?
17:11:27 <mriedem> nope
17:11:33 <gibi> actually not, so that would be good as well
17:11:49 <mriedem> yeah, so i can reply and cross post it, and say we are going to remove it if no one speaks up
17:12:04 <gibi> that works for me
17:13:05 <gibi> lets move on
17:13:44 <gibi> the next is a friendly reminder that the bdm optimization patches are actually ready https://review.openstack.org/#/q/topic:bug/1718226
17:14:13 <gibi> and are now tied to a bug instead of a bp
17:14:53 <mriedem> yeah, that's on my list,
17:14:56 <mriedem> but so are lots of things
17:14:58 <gibi> cool
17:15:12 <gibi> I mean it is cool that it is on the list :)
17:15:18 <gibi> anyhow moving on
17:15:48 <gibi> I've just realized that our transformation burndown chart needs to move to openshift 3
17:15:58 <gibi> and openshift 3 does not have cron jobs in the free account
17:16:09 <gibi> so I actually have to find another free hosting
17:16:32 <gibi> it means that the burndown might be unavailable for a while after the 30th
17:16:52 <mriedem> ah
17:17:06 <mriedem> maybe ask sdague what he used for the api-ref burndown chart?
17:17:16 <gibi> I can do that
17:17:18 <mriedem> it might have been his own server though
17:17:33 <mriedem> another option is maybe openstack-infra has ideas for hosting
17:17:41 <gibi> I could use my raspberry but that only has dynamic dns and that is filtered in some parts of the world
17:18:18 <gibi> I will ask sdague and keep the raspberry as option B
17:18:51 <gibi> OK, moving on
17:19:15 <gibi> last item on my list is Takashi's comment on https://review.openstack.org/#/c/475860/
17:19:48 <gibi> in short I'm proposing to use common json pieces for the notification samples to avoid duplications
17:19:54 <gibi> but it has a side effect
17:20:42 <gibi> the common json sample fragment might not be what we want to show in the doc as a sample
17:21:09 <mriedem> ah yeah, that could get confusing...
17:21:15 <gibi> as it might not be a 100% real sample because some of the parts are replaced only during test execution
17:21:50 <gibi> I'm still in the process of thinking about possible solutions
17:22:01 <gibi> one is to store the replacements as a file as well
17:22:27 <gibi> and apply those replacements not only during test execution but also during doc generation
17:22:43 <gibi> this means more files :/
17:22:46 <mriedem> that might be more trouble than it's worth
17:22:51 <gibi> yepp
17:22:51 <mriedem> and it's confusing and complicated
17:22:53 <mriedem> for new contributors
17:23:58 <gibi> basically we now have 80 lines of duplicated sample for 8 lines of difference
17:24:07 <gibi> for each event_type
17:24:25 <gibi> so I still want to reduce that duplication if possible
17:24:42 <gibi> but I'm a bit stuck about the how
17:25:18 <mriedem> can you remove the problem fields from https://review.openstack.org/#/c/452819/11/doc/notification_samples/payloads/InstanceActionPayload.json ?
17:25:24 <mriedem> like task_state, power_state, vm_state?
17:26:05 <mriedem> or, provide a way to override what's in the template in https://review.openstack.org/#/c/475860/5/doc/notification_samples/instance-power_off-end.json ?
17:26:10 <gibi> hm, that means I have to merge json fragments instead of combining them. but most probably doable
17:26:14 <mriedem> or would those show up twice?
17:26:47 <gibi> we anyhow have our own jsonref implementation so we can tweak that not to duplicate fields
17:27:02 <gibi> and then we should not call it jsonref any more :)
17:27:15 <mriedem> gibiref
17:27:19 <gibi> lol
17:27:36 <gibi> I feel that this could work. thanks for the idea, I will dig into it
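(A minimal sketch of the override idea discussed above, in Python since nova is a Python project. This is not nova's actual jsonref code: the helper name, the override values, and the flat key layout are illustrative assumptions; only the fragment path and the field names task_state/power_state/vm_state come from the review comments earlier in the discussion.)

    import json

    def merge_sample(common_path, overrides):
        # Load the shared payload fragment and let per-sample overrides win,
        # so fields like task_state or power_state do not show up twice.
        with open(common_path) as f:
            payload = json.load(f)
        payload.update(overrides)
        return payload

    # Hypothetical per-sample usage: only the few differing fields are listed,
    # the rest comes from the common InstanceActionPayload fragment.
    sample = merge_sample(
        'doc/notification_samples/payloads/InstanceActionPayload.json',
        {'task_state': None, 'power_state': 'shutdown', 'vm_state': 'stopped'},
    )
    print(json.dumps(sample, indent=2, sort_keys=True))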
17:28:14 <gibi> that was my last item for today. Do you have things to discuss?
17:28:42 <mriedem> nope
17:28:50 <mriedem> just trying to do some scale testing for dan's instance list changes
17:28:55 <mriedem> so we have a benchmark
17:29:13 <gibi> I'm wondering if we want to make that a ci job for some performance testing
17:29:33 <mriedem> there is or was a rally ci job at one point
17:29:36 <gibi> it can even uncover bugs that only appear if a lot of instances are booted in parallel
17:29:50 <mriedem> and tempest used to have the largeops job that would create 300 instances at a time, 3 times in a row
17:30:05 <mriedem> but we didn't want either as voting since they were too unpredictable
17:30:24 <mriedem> what we'd really need is something that runs periodically and posts results so we can see trends
17:30:45 <gibi> I guess your trial with fakedriver is more predictable as it doesn't really spawn instances
17:30:46 <sdague> it was on my own server, that being said, I'd be fine hosting something else if we have pattern here for the burndowns
17:31:23 <gibi> sdague: thanks for the offer.
17:31:45 <gibi> the notification burndown is a fork of your api ref burndown
17:32:11 <gibi> it is a static html + some json data generated periodically
17:32:25 <gibi> https://github.com/gibizer/nova-versioned-notification-transformation-burndown
17:34:12 <gibi> it has some glue stuff for openshift 2 but that will be useless after the end of this month
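(A minimal sketch of the periodic data generation behind such a burndown page, assuming the static html plots a flat JSON list of timestamped counts; the file name, fields, and numbers are illustrative and not taken from the linked repository.)

    import datetime
    import json

    def append_data_point(data_file, remaining, total):
        # Append one timestamped measurement for the static page to plot.
        try:
            with open(data_file) as f:
                points = json.load(f)
        except FileNotFoundError:
            points = []
        points.append({
            'date': datetime.datetime.utcnow().isoformat(),
            'remaining': remaining,
            'total': total,
        })
        with open(data_file, 'w') as f:
            json.dump(points, f, indent=2)

    # Run periodically (cron or equivalent) on whatever host ends up serving the page.
    append_data_point('burndown.json', remaining=42, total=90)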
17:36:14 <gibi> anyhow I think we can sort out the details outside of this meeting
17:36:39 <gibi> if there is nothing else then I will close the meeting
17:37:26 <gibi> then thanks for the discussion
17:37:30 <gibi> #endmeeting