15:00:10 <rhochmuth> #startmeeting monasca
15:00:11 <openstack> Meeting started Wed Jan 13 15:00:10 2016 UTC and is due to finish in 60 minutes.  The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:14 <rhochmuth> o/
15:00:14 <openstack> The meeting name has been set to 'monasca'
15:00:19 <bklei> 0/
15:00:22 <bklei> o/
15:00:23 <bmotz> o/
15:00:41 <witek> hello
15:00:45 <rhochmuth> Agenda is at, https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:00:47 <shinya_kwbt> o/
15:00:53 <qwebirc46365> Hello
15:00:54 <rhochmuth> Agenda for Wednesday January 13, 2016 (15:00 UTC)
15:00:54 <rhochmuth> 1.	Outdated changes, should they be abandoned or taken care of?
15:00:54 <rhochmuth>    1.	https://review.openstack.org/#/c/234449/
15:00:54 <rhochmuth>    2.	https://review.openstack.org/#/c/150620/
15:00:54 <rhochmuth> 2.	Healthcheck approach - decision, based on https://review.openstack.org/#/c/249685/, check the last comment
15:00:54 <rhochmuth> 3.	Pull requests for ansible roles
15:00:54 <rhochmuth> 4.	Alarm count resource, https://review.openstack.org/#/c/257607/
15:00:55 <rhochmuth> 5.	Sorting alarms, https://review.openstack.org/#/c/260697/
15:00:55 <rhochmuth> 6.	Enhance dimension filtering, https://review.openstack.org/#/c/266509/
15:00:56 <rhochmuth> 7.	monasca-log-api:
15:00:56 <rhochmuth>    1.	Security update, https://review.openstack.org/#/c/256404/
15:00:57 <rhochmuth> 8.	Other reviews
15:01:22 <rhochmuth> hi qwebirc46365
15:01:43 <witek> :)
15:01:46 <ddieterly> o/
15:01:47 <rhochmuth> So, there is a lot to go through today
15:01:55 <rhochmuth> mainly reviews
15:02:01 <fabiog> hi
15:02:18 <rhochmuth> some have been sitting there for a while due to holidays and other things
15:02:28 <rhochmuth> so, if someone has more agenda items
15:02:33 <rhochmuth> please add to the list
15:02:52 <rhochmuth> #topic outdated changes
15:03:03 <rhochmuth> https://review.openstack.org/#/c/234449/
15:03:30 <rhochmuth> So, I don't think that the original author will be resolving that
15:03:38 <rhochmuth> he is no longer on the project
15:03:54 <rhochmuth> however, it looks like he fixed a bug
15:04:07 <rhochmuth> we were waiting on unit tests
15:04:13 <rhochmuth> but they were never completed
15:04:30 <rhochmuth> i think someone on the monasca project will need to take ownership for this one
15:04:38 <rhochmuth> i don't want to just abandon it
15:04:47 <rhochmuth> so, are there any volunteers
15:05:02 <rhochmuth> if not, i can try and get it looked at here
15:05:08 <rhochmuth> but no guarantees
15:05:17 <bmotz> I could have a look
15:05:27 <rhochmuth> thanks bmotz
15:05:34 <witek> I could take a look too
15:05:44 <rhochmuth> thanks witek
15:05:44 <shinya_kwbt> I want to try too
15:05:44 <tomasztrebski> me too, but been pretty booked recently, so also can't promise it, maybe at least I will do a review
15:06:05 <bmotz> I'm happy to defer to witek or shinya_kwbt :)
15:06:13 <witek> :)
15:06:23 <tomasztrebski> so many....maybe a person who takes this sooner should leave a comment saying: 'It's mine....do not touch :)'
15:06:24 <shinya_kwbt> :)
15:07:19 <rhochmuth> ok, i'll let bmotz and witek figure it out
15:08:27 <rhochmuth> So, the next review that could use some attention is, https://review.openstack.org/#/c/150620/
15:08:41 <rhochmuth> this one was also submitted by a developer that is no longer working on monasca
15:09:01 <tomasztrebski> we had a discussion there but the change is over 3 months old and I am pretty much not sure if it fits anymore or if it is needed anyway
15:09:29 <rhochmuth> so, do we want to abandon this one
15:09:35 <rhochmuth> it isn't too important
15:09:46 <ddieterly> yea, let's abandon it
15:09:57 <fabiog> +1
15:09:59 <bmotz> +1
15:10:00 <tomasztrebski> +1
15:10:07 <tomasztrebski> actually...-2 ;D
15:10:14 <rhochmuth> lol
15:10:33 <tomasztrebski> just a quick joke...;-)
15:10:38 <rhochmuth> OK, "abandon" button hit
15:10:46 <rhochmuth> review is now abandoned
15:10:57 <rhochmuth> that was easy
15:11:05 <slogan621> too easy :-)
15:11:15 <rhochmuth> #topic healthcheck
15:11:24 <rhochmuth> tomasz, i think this is you
15:11:42 <rhochmuth> https://review.openstack.org/#/c/249685/
15:12:00 <rhochmuth> i couldn't tell from the comments, exactly what the proposal was
15:12:01 <tomasztrebski> basically I've made an investigation some time ago and would love to get some feedback for the last comment in that change
15:12:44 <tomasztrebski> one thing was not established or covered (doing it as filter via oslo.middleware or separate endpoint)
15:12:47 <rhochmuth> i'm not sure i understand the comments
15:13:14 <rhochmuth> can you provide more details and discuss
15:13:56 <tomasztrebski> basically, it is possible to run embedded gunicorn with healthcheck only, but it has to be done in separate process (otherwise actual API can't start)
15:14:11 <tomasztrebski> if done that way, turning everything off is problematic
15:14:45 <tomasztrebski> there's a bunch of exceptions coming from multiprocess library
15:14:48 <rhochmuth> by embedded gunicorn, does that mean gunicorn gets included in the api, as an import?
15:15:38 <tomasztrebski> it means that you basically run another WSGI app from within API by having a hook on pre or post start event from gunicorn
15:16:26 <tomasztrebski> for me it just looks really cumbersome and makes everything hard to understand
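To make the rejected approach concrete: a hedged sketch of what "embedded gunicorn" could look like, assuming a gunicorn `when_ready` server hook that spawns a second process serving only a bare healthcheck WSGI app. The hook name is a real gunicorn config hook, but the rest of the wiring (port, app shape) is illustrative; as described above, shutting that extra process down cleanly is where the multiprocessing exceptions come from.

```python
# Illustrative sketch only: a gunicorn config-file hook that starts a
# separate process serving a bare healthcheck WSGI app. The when_ready
# hook is a real gunicorn hook; everything else is hypothetical wiring.
import multiprocessing
from wsgiref.simple_server import make_server

def healthcheck_app(environ, start_response):
    # Minimal WSGI app: if this answers at all, the process is alive.
    start_response("200 OK", [("Content-Type", "text/plain")])
    return [b"OK"]

def _serve_healthcheck(port):
    # Runs forever in the child process; stopping it cleanly when the
    # main API shuts down is the problematic part described above.
    make_server("", port, healthcheck_app).serve_forever()

def when_ready(server):
    # gunicorn server hook: fired once the arbiter is ready.
    proc = multiprocessing.Process(
        target=_serve_healthcheck, args=(5001,), daemon=True)
    proc.start()
```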
15:17:32 <rhochmuth> so, do you want to add this, or is your recommendation to not add it
15:18:20 <tomasztrebski> I think it is worth adding, it accomplishes one thing, if healthcheck cannot respond that means API is down
15:18:27 <fabiog> tomasztrebski: why not just having an api endpoint in the same process, e.g. /v2/healthcheck?
15:19:01 <fabiog> tomasztrebski: so a LB can ping that endpoint periodically
15:19:33 <fabiog> tomasztrebski: it will be better than the middleware that is always performed for every request
15:20:15 <tomasztrebski> I think adding a separate endpoint is actually a way to go, it has not been implemented in this change, because I was waiting to cover it at the meeting
15:20:28 <tomasztrebski> fabiog seems to enjoy the idea
15:20:31 <tomasztrebski> ")
15:20:33 <tomasztrebski> :)
15:20:51 <rhochmuth> i was just looking at the comment stream
15:20:52 <tomasztrebski> and basically one advantage, if you think about it, is that we have more control over what we want to return
15:21:11 <rhochmuth> i think tsv had some comments related to trying to run this separately
15:21:15 <fabiog> tomasztrebski: yes, this is simple and it avoids thinking the API is healthy because a "ghost" process is the only thing running
15:21:21 <tsv> tomasztrebski, fabiod: I like the idea of separate endpoint for healthcheck too, as that would give a good logical separation
15:21:45 <tomasztrebski> because in filter (at least with oslo.middleware) you know nothing about HTTP method, so no possibility to run lean healthcheck for HEAD or more complex for GET requests
15:21:46 <witek> endpoint +1
15:21:48 <rhochmuth> so, it sounds like we are reaching consensus
15:22:02 <fabiog> tsv: separated means a new URL that runs in the same process of all the other URLs
15:22:26 <rhochmuth> basically, we would add a new "resource" in falcon terminology for healthcheck
15:22:26 <tomasztrebski> hmm, so it seems like I got it all wrong :/
15:22:41 <rhochmuth> i don't think you got it wrong
15:22:52 <tsv> fabiog: sure, but a separate endpoint could be turned on/off with different access control too right ?
15:22:57 <rhochmuth> i think you did a lot of the analysis and research
15:23:05 <rhochmuth> and it didn't work out in the end
15:23:09 <rhochmuth> that's ok
15:23:11 <tomasztrebski> ;-)
15:23:23 <rhochmuth> tsv correct
15:23:26 <fabiog> tsv: yes, only some users can access that
15:23:37 <fabiog> tsv: for instance services like LB
15:23:40 <rhochmuth> but we don't have rbac either
15:23:59 <tsv> ok
15:24:03 <rhochmuth> so, how about for now, we do the simple thing
15:24:12 <tomasztrebski> meaning to say ?
15:24:21 <fabiog> rhochmuth: I will work on RBAC for python ... don't know when .. but I did it before :-)
15:24:38 <rhochmuth> thanks fabiog
15:25:02 <rhochmuth> so, tomasz, i think you are all clear to do the simple thing and add a healthcheck resource
15:25:16 <fabiog> +2
15:25:22 <tomasztrebski> I think I can live with that approach :D [+2]
15:25:30 <rhochmuth> all in favor +2
15:25:42 <ddieterly> +2
15:26:02 <rhochmuth> are there any other openstack services that implement healthcheck resources
15:26:02 <bmotz> +2
15:26:05 <tsv> +2
15:26:26 <rhochmuth> just trying to understand a little more if there is any prior method in place
15:26:37 <slogan621> seems like it might be of general utility
15:26:46 <tomasztrebski> gosh, I thought I was trying to figure that one out and in doing that I found oslo.middleware
15:26:57 <rhochmuth> ahh, i see
15:27:04 <slogan621> something you might find in, say, oslo
15:27:27 <tomasztrebski> it has some built-in mechanism that you just plug in via configuration and there it goes
15:27:40 <tomasztrebski> + base class to create new healthchecks
15:27:46 <slogan621> nod
15:28:22 <rhochmuth> well, at the risk of not doing what has been done before
15:28:32 <rhochmuth> i think we've all approved the new healthcheck
15:28:49 <rhochmuth> sounds like we should continue with the next topic
15:28:52 <rhochmuth> thanks tomasz
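For reference, a minimal sketch of the agreed direction: a plain Falcon-style resource class with a lean HEAD responder and a more detailed GET, matching the HEAD/GET split raised earlier. The class name, route, and check registry are hypothetical illustrations, not what the eventual review implemented.

```python
import json

class HealthCheckResource:
    """Hypothetical healthcheck resource: lean HEAD, detailed GET.

    Falcon resources are plain classes with on_<method> responders;
    the check registry here is an assumption for illustration only.
    """

    def __init__(self, checks=None):
        # checks: mapping of component name -> callable returning True if healthy
        self._checks = checks or {}

    def on_head(self, req, resp):
        # Lean liveness probe: reaching this handler means the API is up.
        resp.status = "204 No Content"

    def on_get(self, req, resp):
        # Detailed check: run every registered check and report per component.
        results = {name: check() for name, check in self._checks.items()}
        resp.status = "200 OK" if all(results.values()) else "503 Service Unavailable"
        resp.body = json.dumps(results)

# Wiring would then be a single route on the existing app, e.g.:
#   api.add_route('/healthcheck', HealthCheckResource({'kafka': check_kafka}))
```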
15:29:07 <fabiog> rhochmuth: tomasztrebski: there are discussions of getting rid of WSGI in openstack, if that is the case our solution will be longer term
15:29:38 <rhochmuth> #topic pull requests for ansible roles
15:30:03 <rhochmuth> tomasz, is this also yours?
15:30:11 <witek> there is a bunch of pull requests for ansible roles which have been waiting for some time already
15:30:16 <witek> mine
15:30:55 <rhochmuth> can you send me a link of the repos, and i'll take a look
15:31:07 <rhochmuth> sorry, i've lost track this past 4 weeks a bit
15:31:08 <tomasztrebski> fabiog: do you mean that WSGI is to be replaced with something else?
15:31:22 <witek> now, or offline?
15:31:29 <rhochmuth> offline
15:31:32 <witek> ok
15:31:36 <witek> thanks
15:31:37 <rhochmuth> thanks
15:31:59 <witek> any tips for the future to make the process better?
15:32:12 <fabiog> tomasztrebski: I heard rumors that they want to change, but for now it is not clear with what and when
15:32:42 <rhochmuth> witek: just ping us directly if no one is looking at your changes
15:32:47 <rhochmuth> soon enough
15:33:01 <witek> is mailing list a good place?
15:33:04 <rhochmuth> the ansible repos are managed outside of gerrit
15:33:11 <rhochmuth> sure, that would work too
15:33:20 <rhochmuth> that is probably better
15:33:33 <witek> nice
15:33:41 <witek> ok, that's all
15:33:48 <rhochmuth> thanks
15:33:55 <rhochmuth> #topic alarms count resource
15:33:56 <rhochmuth> https://review.openstack.org/#/c/257607/
15:34:20 <rhochmuth> so, rbrandt has been busy adding some new resources and query parameters
15:34:53 <rhochmuth> the alarms count resource adds the ability to get the counts of alarms in various conditions
15:35:10 <rhochmuth> as well as filter them various ways
15:35:37 <rhochmuth> the main usage is on overview/summary pages
15:35:47 <rhochmuth> previously, you would need to query and get all alarms
15:36:00 <rhochmuth> and then do all your own grouping and counting client side
15:36:12 <rhochmuth> with 10,000s of alarms the performance was dropping
15:36:39 <rhochmuth> and if paging needed to be done that would further increase the latency
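As a sketch of the win being described: one server-side count query instead of paging every alarm to the client and grouping locally. The endpoint path and parameter names below are assumptions read off the review under discussion, not a confirmed API contract.

```python
from urllib.parse import urlencode

MONASCA_API = "http://monasca-api:8070/v2.0"  # placeholder base URL

def alarm_count_url(group_by=None, **filters):
    """Build a query against the proposed alarm count resource.

    group_by and the filter names (state, severity, ...) are assumed
    from the review under discussion, not a documented API.
    """
    params = dict(filters)
    if group_by:
        params["group_by"] = ",".join(group_by)
    return "%s/alarms/count?%s" % (MONASCA_API, urlencode(sorted(params.items())))

# One request replaces fetching 10,000s of alarms and counting client side:
#   alarm_count_url(group_by=["state", "severity"])
```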
15:37:02 <rhochmuth> so, i'm just soliciting feedback for rbrandt
15:37:11 <rhochmuth> i've looked at the code and done some testing on it
15:37:20 <rhochmuth> so, left to my own devices, i would approve
15:37:36 <rhochmuth> but wanted to make sure everyone knew what was in process and agreed with the changes...
15:38:10 <fabiog> rhochmuth: I know it would be a bigger change, but wouldn't it be better to dynamically keep the count when alarms are created or deleted or fire?
15:38:13 <rhochmuth> there is also a related question about hibernate support too, and whether that is necessary for approval
15:38:26 <fabiog> rhochmuth: and then it will be a really simple and fast query
15:38:50 <rhochmuth> keep the state in the api?
15:39:24 <rhochmuth> i don't think that is going to necessarily work well
15:39:24 <ddieterly> maybe we need a count for all resources?
15:39:47 <rhochmuth> the queries that you would like to apply aren't known ahead of time
15:39:55 <rhochmuth> so, you would have to keep everytyhing in memory
15:39:56 <fabiog> rhochmuth: no, I got it I think it works as I was expecting
15:40:22 <rhochmuth> ddieterly, a count resource would be useful on other resources too
15:40:23 <tomasztrebski> I'd prefer querying the DB, at least you always make sure that at a given point in time the returned number reflects reality
15:40:28 <rhochmuth> but right now, we are trying to limit
15:40:34 <rhochmuth> the amount of work
15:40:53 <rhochmuth> tomasz: i agree
15:40:55 <ddieterly> sure, but for the future, it would be good to keep the resources consistent
15:42:17 <rhochmuth> so, my goal is to review this change, and assuming some other reviewers +1, then I would like to get this merged in this week
15:42:34 <rhochmuth> or as soon as all issues are resolved
15:43:05 <rhochmuth> There is the related review at, https://review.openstack.org/#/c/260697/
15:43:33 <tomasztrebski> one thing was not answered - is hibernate implementation needed to approve that ?
15:44:00 <rhochmuth> i would say no
15:44:33 <rhochmuth> that would leave this functionality unsupported if hibernate is used
15:44:59 <rhochmuth> i'm assuming that all the existing resources, query parameters, …, would work
15:45:09 <rhochmuth> but the new functionality would be unsupported
15:45:39 <rhochmuth> so, if a query to the counts resource was done, and hibernate wasn't implemented yet, then it would fail
15:45:49 <rhochmuth> but, i'm assuming that we aren't breaking anything
15:45:55 <witek> have to check if we can plan some resources for that
15:46:16 <rhochmuth> so, are you ok with the above statements
15:46:20 <rhochmuth> that i'm making
15:46:57 <rhochmuth> basically, as far as hibernate, we wouldn't break anything, but new features might not work, until implementation is completed
15:47:39 <witek> just throw NotImplemented at first
15:47:44 <rhochmuth> correct
15:48:00 <rhochmuth> we'll need to test this ourselves
15:48:03 <witek> ok with me
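A sketch of the interim hibernate behaviour just agreed: existing repository calls keep working, while the new counts entry point raises NotImplementedError until a real implementation lands. Class and method names here are hypothetical, not Monasca's actual repository interface.

```python
class HibernateAlarmRepository:
    """Hypothetical stand-in for the hibernate-backed alarm repository."""

    def list_alarms(self, tenant_id, **filters):
        # Existing queries keep working; placeholder body for the sketch.
        return []

    def get_alarm_counts(self, tenant_id, group_by=None):
        # New functionality: fail loudly until someone implements it,
        # rather than silently returning wrong data.
        raise NotImplementedError(
            "alarm counts are not yet implemented for the hibernate repository")
```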
15:48:10 <rhochmuth> awesome, thanks!
15:48:32 <witek> you're welcome :)
15:48:37 <rhochmuth> rbrandt will test to ensure we didn't break it
15:48:39 <shinya_kwbt> I will try to test too.
15:48:52 <rhochmuth> thanks shinya
15:49:16 <shinya_kwbt> :)
15:49:34 <rhochmuth> so, i'm not going to list all the reviews that rbrandt has in flight, but they are all related to the goal of improving the performance in user-interfaces
15:49:52 <rhochmuth> and as a result, they all involve some new functionality
15:50:23 <rhochmuth> but, we're being careful to not break anything that is already implemented
15:50:33 <fabiog> rhochmuth: also we have the python-client requirements patch that hit another wall. Apparently all the oslo libraries moved and the update on the keystone client is not enough
15:50:34 <fabiog> https://review.openstack.org/#/c/251674/
15:50:47 <witek> just put us as reviewers for hibernate related changes
15:50:59 <rhochmuth> ok
15:50:59 <slogan621> by user-interfaces you mean horizon/monasca-ui?
15:51:26 <rhochmuth> well, i mean any ui, that ends up using the new resources
15:51:40 <rhochmuth> we are not using any of these new resources in horizon yet
15:51:45 <slogan621> ok
15:51:47 <rhochmuth> but those would be great to add
15:52:36 <rhochmuth> so, we've basically done a lot of the pre-work to enable a lot of improvements in the monasca-ui
15:53:15 <rhochmuth> fabiog: yes the liberty branches moved
15:53:29 <rhochmuth> joe keen was telling me that they probably resolved a bug
15:53:44 <rhochmuth> in the process they bumped versions on some libraries, either keystone or oslo
15:53:47 <rhochmuth> i can't recall
15:53:52 <fabiog> rhochmuth: so do you think that bumping up the versions in the client will not create issues?
15:53:57 <rhochmuth> i believe it was a big bump, like a major version
15:54:16 <fabiog> rhochmuth: I can do a simple test, update and see if jenkins builds it
15:54:20 <rhochmuth> we were hoping to take the minimal route
15:54:32 <rhochmuth> and bump versions
15:54:41 <rhochmuth> to whatever liberty is at again
15:54:54 <rhochmuth> but, it will take a couple of days is what i was told
15:55:05 <rhochmuth> and it needs to be prioritized on my team
15:55:08 <fabiog> rhochmuth: ok, that is not a problem
15:55:20 <rhochmuth> you might touch-base with joe
15:55:30 <rhochmuth> i don't know if you or someone on your team can resolve
15:55:33 <fabiog> rhochmuth: ok, I will
15:55:46 <rhochmuth> joe put in about two weeks prior to xmas, and then this happened
15:55:59 <rhochmuth> fabiog: ok, thanks!
15:56:23 <rhochmuth> sorry about this, but openstack is turning out to be moving more than expected
15:56:48 <rhochmuth> we believe we need to start making branches, but would like to do that at mitaka
15:57:26 <rhochmuth> tomasz: i don't think we are going to cover your security changes to the log api
15:57:32 <rhochmuth> as we are running out of time
15:57:36 <rhochmuth> i left some comments
15:57:55 <bmotz_> we're definitely quite keen on starting some stable branches at some stage
15:57:58 <rhochmuth> so, hopefully we can resolve by commenting in gerrit and possibly also cover next week
15:58:29 <rhochmuth> bmotz: yes, i think we convinced ourselves we need branches
15:58:43 <rhochmuth> so, for mitaka we'll need to discuss
15:58:49 <tomasztrebski> we can discuss it next week, in the meantime I will cover your comments
15:58:54 <rhochmuth> thanks
15:59:09 <rhochmuth> so, we've run out of time
15:59:19 <rhochmuth> don't forget we are still planning on the remote mid-cycle
15:59:38 <rhochmuth> on wed/thurs feb 3rd and 4th
15:59:45 <rhochmuth> we should start working on an agenda
16:00:03 <rhochmuth> hopefully i got those dates correct
16:00:10 <rhochmuth> ok, we've run out again
16:00:11 <fabiog> rhochmuth: yes
16:00:14 <rhochmuth> thanks everyone
16:00:16 <tomasztrebski> ok, so see you next time ;-)
16:00:21 <fabiog> bye
16:00:24 <rhochmuth> bye
16:00:25 <bmotz_> bye
16:00:34 <rhochmuth> #endmeeting