15:00:52 #startmeeting monasca
15:00:52 Meeting started Wed Nov 16 15:00:52 2016 UTC and is due to finish in 60 minutes. The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:53 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:56 The meeting name has been set to 'monasca'
15:00:56 bye
15:00:57 thanks, cya
15:01:06 o/
15:01:11 o/
15:01:14 o/
15:01:15 hello
15:01:16 o/
15:01:17 Agenda has been posted at https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:01:19 o/
15:01:19 OMG
15:01:28 :)
15:01:36 Agenda for Wednesday November 16, 2016 (15:00 UTC)
15:01:36 1. Ocata-1 milestone
15:01:36 2. Ocata Community Goals Acknowledgement
15:01:36    1. http://governance.openstack.org/goals/ocata/remove-incubated-oslo-code.html
15:01:36 3. confluent-kafka-python for monasca-common (next to kafka-python)
15:01:53 And the award for the longest agenda ever goes to team Monasca
15:01:58 o/
15:02:12 flowers, fame and glory ... :D
15:02:32 We might as well get started
15:02:43 #topic Ocata-1 milestone
15:02:59 tomorrow is the Ocata 1st milestone
15:03:07 should we tag the repos?
15:03:20 Sure, sounds like a good idea
15:03:31 ok, I'll take care of it
15:03:42 There are already features, like the dimension names/values, that are not in python-monascaclient
15:04:01 which we should get tagged so that when devstack builds, they at least show up
15:04:07 thanks witek
15:04:49 i guess that is it on that topic
15:04:55 #topic Ocata Community Goals Acknowledgement
15:05:17 we should acknowledge the community-wide goals for Ocata
15:05:24 there is actually one
15:05:32 ?
15:05:33 Remove Copies of Incubated Oslo Code
15:05:42 I guess that was a problem only for the client
15:05:46 which was already merged
15:06:13 so, i'm not aware of any copies of oslo code anymore
15:06:23 so, are we good on that one?
15:06:32 is there a link i should be referring to?
15:06:42 ok, I'll update the wiki page then
15:06:47 AFAIK we are; according to the announcement, only the client was listed by OpenStack as something we need to fix
15:08:25 yes, it seems we're ok on that one
15:08:54 is there something we need to address with the announcement?
15:08:57 i didn't see it
15:09:13 http://governance.openstack.org/goals/ocata/remove-incubated-oslo-code.html
15:09:18 there is that wiki page
15:09:52 "monasca Planning Artifacts: Completion Artifacts:"
15:10:25 i don't see anything listed
15:10:32 for monasca
15:11:38 is there anything to do, should a completion artifact be listed
15:11:48 or are we basically done
15:11:56 I'm not sure
15:12:14 we could put links to the completed reviews
15:12:24 +1
15:12:24 sounds like busy work
15:12:38 there were just 3, right?
15:12:43 yeah
15:13:04 https://review.openstack.org/#/q/status:merged+project:openstack/python-monascaclient+branch:master+topic:goal-remove-incubated-oslo-code
15:15:17 ok, i could update that page, unless you want to, witek
15:15:33 you decide :)
15:16:09 i'll look into it
15:16:14 thanks
15:16:15 see what is involved
15:16:34 #topic confluent-kafka-python
15:16:55 is it really like 10X faster?
15:17:04 according to the document
15:17:33 i would be all for an upgrade
15:17:34 at least in a single-node configuration
15:17:35 we did not move on investigating/writing any code; we wanted to clarify whether monasca would be interested in this
15:17:45 yes, interested
15:18:00 need to have joe look at this
15:18:11 it is not listed in global-requirements
15:18:19 he's been measuring and following the client that we are using
15:18:33 we don't have that much time to work on this, but code is WIP to make monasca-common use that library next to the existing kafka-python
15:18:35 the main reason for not upgrading is that they had some issues
15:18:46 everything would depend on the actual kafka library installed
15:19:00 so the transition, if any, would be smooth and not enforced
15:19:18 plus it would all end with a new gate job for the monasca plugin that would stack up everything with confluent-kafka
15:19:20 so, we could back out if there were issues?
15:19:27 that's the rough plan for this
15:19:44 actually you wouldn't need to back out
15:19:54 have you done any analysis of the api or persister?
15:20:04 no, not yet
15:20:05 i'm assuming you want this for both metrics and logging
15:20:18 the python persister is 3x slower than the java one
15:20:38 doing this in monasca-common would have a potential impact (a good one) on every project using it
15:21:03 so, are we using monasca-common for kafka everywhere at this point?
15:21:37 ehm... I thought so; actually, now that you ask, I am not really sure
15:21:48 log-api is using it for sure
15:22:09 joe was running into crashes with the latest kafka client
15:22:15 not sure if those have been resolved
15:22:21 but this would obviously bypass that
15:22:56 would you want to update global requirements?
15:23:06 we might run into problems there
15:23:08 with confluent-kafka?
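The "depend on the actual kafka library installed" idea discussed above can be sketched roughly as follows. This is a hypothetical illustration only, not actual monasca-common code: the helper name select_kafka_backend is invented for this sketch, and no such function exists in monasca.

```python
# Hypothetical sketch of the approach discussed in the meeting: prefer
# confluent-kafka-python when it is installed, fall back to the existing
# kafka-python otherwise, so the transition is smooth and not enforced.

def select_kafka_backend():
    """Return the name of the available kafka client library, if any."""
    try:
        import confluent_kafka  # noqa: F401  (C extension, needs librdkafka)
        return 'confluent-kafka'
    except ImportError:
        pass
    try:
        import kafka  # noqa: F401  (pure-python kafka-python)
        return 'kafka-python'
    except ImportError:
        return None

print(select_kafka_backend())
```

A consumer of such a shim would never hard-depend on either library, which is what makes the transition reversible without a backout.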
15:23:13 correct
15:23:25 that might be problematic, because it requires compiling a C library
15:23:28 my assumption is that they won't want two kafka libraries
15:23:28 rdkafka
15:23:32 what are the requirements to get added to global-requirements?
15:24:02 add a change to the infra project and see how it goes? :D
15:24:21 usually nice, well-supported python code which is properly licensed
15:24:28 and doesn't compete with an existing library
15:24:49 industry adoption
15:24:49 the second one is not the case here
15:25:00 performance would be another criterion
15:25:10 so, we got ujson in
15:25:12 it is also relatively new
15:25:21 which competes with json and simplejson
15:25:34 both are Apache-licensed, but I would not go that far right now; I mean first we should clarify whether the API of confluent-kafka-python meets all the requirements that monasca presents
15:25:36 so, it might be accepted based on the performance analysis
15:25:44 but i expect it will be questioned
15:25:44 performance on one side, but the capabilities on the other
15:25:57 right
15:26:25 so, i'll let you guys drive that
15:26:39 i wasn't looking at reviews for a few days
15:26:45 so, a little behind
15:26:56 ok, guess now we will need to discuss that internally, give feedback on your feedback and plan that somehow
15:27:20 sure
15:27:26 should we move on
15:27:30 +1
15:27:39 #topic https://review.openstack.org/#/c/366024/
15:27:52 migrate devstack to xenial
15:28:25 there is a vagrant problem.
15:28:28 one of our (mine, Witek's) folks tested this recently for both Java & Python
15:28:38 it stacks up with a workaround
15:28:52 what was the workaround?
15:29:16 shinya_kwbt: the problem you have can be solved by starting up the machine without provisioning and then creating one snapshot
15:30:09 I solved it with that
15:30:11 1. vagrant up --no-provision && vagrant halt  2. open the virtualbox gui  3. open the target vm settings and change the storage controller from SCSI to SATA  4. vagrant up
15:30:11 so instead of "vagrant destroy -f; vagrant up" for a clean install, you would do "vagrant sandbox rollback; vagrant provision"
15:30:33 I have also tested with the bento box, but it exited on the smoke tests
15:30:38 assuming vagrant-sahara is used
15:30:56 witek: me either
15:31:53 the non-vagrant environment works fine, so I think tempest-gate may work
15:32:16 I assume so too, looking at other projects that already moved to xenial
15:32:38 @witek actually created a change that would result in an NV gate for ubuntu-xenial
15:33:06 we would see any issues with that prior to merging shinya_kwbt's change
15:33:37 https://review.openstack.org/#/c/397789/
15:35:00 so, that review looks like it should be merged
15:35:09 as it is experimental
15:35:14 yes
15:35:23 for the agent, non-voting
15:35:29 then we can see if shinya's merge works
15:35:31 the rest experimental
15:35:45 I hope it works this way :)
15:35:46 rhochmuth: thanks
15:35:48 then we can see if shinya's review works
15:35:58 and if it does, it can be merged
15:36:12 then we can change it from experimental
15:36:20 correct
15:36:31 although Vagrant will still be partly broken
15:36:38 with xenial
15:37:02 after shinya's change merges, if i understand correctly
15:37:13 Yes, the VirtualBox SCSI controller has a problem. The trusty image has a SATA controller but xenial has SCSI.
15:37:32 those bastards
15:37:40 vagrant is still an issue, but I think we can merge it if the gate tests pass
15:37:45 well, we could try to solve that to make vagrant work on trusty still; I have at least one idea for that
15:37:48 or two
15:37:49 :-)
15:37:50 ;-)
15:38:15 thanks tomasz
15:38:35 so, i'm going to put a +1 on your experimental review, witek
15:38:45 then we can watch what happens next
15:38:50 don't thank me now... not really sure if I will find any time to devote to that
15:38:52 thank you
15:39:05 welcome
15:39:07 done
15:39:29 thanks witek and shinya!
15:39:37 and tomasz and everyone
15:40:24 thanks tomasz, witek
15:40:27 #topic https://review.openstack.org/#/c/384128/
15:41:09 tomasz, that is you
15:41:22 Granular logging control
15:41:26 just a reminder to take a look; I already have monasca-api and monasca-log-api working with the new logging configuration, so it would just supplement what has already been done on that topic
15:42:04 the agent and notification components are not that easy to refactor to work with oslo.log
15:42:21 ok, looks fine to me
15:42:29 not to mention that it would be like a revolution there instead of a simple refactor
15:42:32 but i should look at it a little closer
15:42:37 np
15:42:55 #topic https://review.openstack.org/#/c/391576/
15:43:18 also me
15:43:28 it is failing
15:43:39 but, i think it is a good idea
15:43:43 yeah, and it will keep failing, because it requires a gate change
15:43:55 this is not like the zookeeper change I posted some time ago
15:44:02 zookeeper is built in inside devstack
15:44:09 kafka is yet another plugin
15:44:22 but if you find it a good idea, I will work on this in my free time
15:44:53 the only issue i have is that we might want to consider upgrading to the newest kafka
15:45:15 0.10.1 or later
15:45:39 we might go with another gate which would be NV and use the newer kafka
15:46:08 luckily the version of kafka is part of the plugin settings, so it can be overridden in the gate setup
15:46:21 the plugin also specifies such variables AFAIK
15:46:43 well, in general then i think it is a good idea
15:46:50 what other projects are using kafka?
15:46:53 do you know
15:47:02 no idea
15:47:04 :(
15:47:17 the problem is that the kafka releases are incompatible
15:48:33 so, i think you are approved, tomasz, to proceed if you want to
15:48:38 i can't see any problems
15:48:46 seems like we are covered
15:48:56 if we want to upgrade
15:49:21 i will figure something out ;), at least working on using the plugin in monasca and/or a new gate
15:49:34 ok, thanks
15:49:49 #topic https://review.openstack.org/#/c/395246/
15:49:57 These two are mine
15:50:01 witek: the same gate should be done also for the log-api devstack jobs
15:50:05 Just trying to get some attention on them
15:50:31 We're trying to backfill data, but we're blocked by lacking these abilities
15:51:59 so, does anyone have any opinions on those two reviews
15:52:08 yeah, just posted one ;d
15:52:09 i thought about the backfill problem
15:52:33 wasn't sure if this should just be an admin feature
15:53:00 It's not something we would want to expose to customers
15:53:04 was wondering, since it isn't in python, if it should be added
15:53:11 Although we could tie it to a brand new role
15:53:32 a "backfill" role
15:53:45 sounds a bit cumbersome
15:53:55 complicated
15:53:56 tomasztrebski: I'm not sure I follow your comment on that patch, can you elaborate?
15:54:08 AFAIK we should not add more roles; I would consider moving to policies... I know that is a lot of work, but there is a reason why there's a dedicated oslo library for that
15:54:19 rhochmuth: Yeah, we just went with the cleanest solution for the moment.
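The role-versus-policy trade-off raised above can be illustrated with a minimal sketch. Everything here is invented for illustration: the rule name "metrics:backfill", both helper functions, and the inline policy table do not exist in monasca, and a real policy-based implementation would go through oslo.policy's Enforcer rather than a plain dict.

```python
# Hypothetical contrast of the two approaches discussed in the meeting.

# 1) Hard-coded role check (the "backfill role" idea): who may backfill
#    is baked into the code.
def may_backfill_role(user_roles):
    return 'backfill' in user_roles

# 2) Policy-style check: the rule is data, not code, so operators can
#    change who may backfill without patching the API service.
POLICY = {
    'metrics:backfill': {'admin', 'backfill'},
}

def may_backfill_policy(user_roles, policy=POLICY):
    return bool(policy['metrics:backfill'] & set(user_roles))

print(may_backfill_policy(['admin']))         # prints True
print(may_backfill_policy(['monasca-user']))  # prints False
```

The second shape is why the meeting leans toward policies: the extra work is in wiring up the enforcement library once, after which access rules become configuration.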
15:54:50 yeah, the problem is that the java code doesn't support oslo
15:55:06 ahhh... java ;/
15:55:34 but getting the python code to implement rbac like the rest of openstack would be good
15:55:37 it's been on the list forever
15:55:52 so in case of a +1 for policies, that would require extra work for java
15:56:09 anyway, i thought about this a bit
15:56:13 whether it's the right direction
15:56:25 i can't see any problems with what is being proposed in the review
15:56:48 and, it sounds like we don't want to do anything with the python code for this case
15:56:53 right?
15:57:32 Not for the moment anyway.
15:59:28 should we move to #openstack-monasca?
15:59:43 we are at the end of the hour slot
15:59:57 I would prefer to move the rest to next week
15:59:58 uh... I won't be able to join, got to go out
16:00:02 me too
16:00:10 i won't be around next week
16:00:11 what can be covered in gerrit, let's cover it there
16:00:24 ok, see you in two weeks
16:00:33 bye
16:00:37 #endmeeting