14:00:34 <witek> #startmeeting monasca
14:00:35 <openstack> Meeting started Wed Nov 15 14:00:34 2017 UTC and is due to finish in 60 minutes. The chair is witek. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:36 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:38 <openstack> The meeting name has been set to 'monasca'
14:00:50 <witek> Hello everyone
14:00:50 <dougsz> hi all
14:01:10 <fouadben> hi all
14:01:43 <witek> small group today
14:01:52 <sc> hi
14:02:23 <witek> let's start then
14:02:33 <dougsz> I added point 1. to the agenda
14:02:40 <dougsz> I'm new to the monasca project
14:02:46 <witek> #topic value_meta in alarms
14:02:55 <witek> hi Doug
14:03:01 <dougsz> Hi Witek
14:03:12 <witek> nice to see StackHPC again :)
14:03:18 <dougsz> :)
14:03:50 <witek> https://storyboard.openstack.org/#!/story/2001282
14:03:57 <dougsz> So this one is about alarms on log metrics
14:04:14 <dougsz> And how it could be helpful to include some value_meta in the alarm notification
14:04:49 <dougsz> So for example, for log metrics, that value meta could be the first line of a log message
14:05:08 <witek> yes, it could be useful in other use cases as well
14:06:02 <witek> at the moment the information gets 'lost' in monasca-thresh
14:06:09 <dougsz> That's right
14:06:20 <akiraY> o/
14:06:36 <dougsz> I had a brief look at not dropping it and it looks to involve adding another table in the DB
14:06:55 <dougsz> Similar to the existing metric dimensions table
14:07:35 <dougsz> I guess, if people think the story is worth pursuing I could write a spec with some more detail
14:08:11 <witek> I think it should be enough to extend the alarm-state-transition message
14:08:28 <witek> to include value and value_meta
14:09:11 <witek> and then monasca-notification could use it to create the notification message
14:09:28 <dougsz> sounds good
14:09:49 <dougsz> so you don't think modifying the DB is required?
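[editor's note] To make the proposal above concrete, here is a minimal sketch of what an alarm-state-transition message extended with the triggering metric's value and value_meta might look like. The field names and envelope layout are illustrative assumptions, not the actual Monasca message schema; the real change would be defined in the spec dougsz offered to write.

```python
import json

def build_alarm_state_transition(alarm_id, old_state, new_state,
                                 metric, value, value_meta):
    # Hypothetical alarm-state-transition envelope; field names are
    # assumptions for illustration, not the real Monasca schema.
    return {
        "alarm-transitioned": {
            "alarmId": alarm_id,
            "oldState": old_state,
            "newState": new_state,
            "metrics": [metric],
            # New fields proposed in story 2001282:
            "value": value,
            "valueMeta": value_meta,
        }
    }

msg = build_alarm_state_transition(
    alarm_id="42",
    old_state="OK",
    new_state="ALARM",
    metric={"name": "log.error", "dimensions": {"hostname": "node-1"}},
    value=1.0,
    # e.g. the first line of the offending log message:
    value_meta={"first_line": "ERROR: disk full on /dev/sda1"},
)
print(json.dumps(msg, indent=2))
```

monasca-notification could then read `value` and `valueMeta` from this message to build a richer notification, without touching the MySQL schema.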
14:10:27 <witek> alarm-state-transitions are persisted to InfluxDB
14:11:02 <witek> we might have to modify persister
14:11:02 <dougsz> ah sure - i was referring to the MySQL DB
14:11:49 <witek> no, I don't think we have to touch MySQL
14:12:13 <dougsz> Ok thanks - I'll go back and have a closer look.
14:12:29 <witek> in the story you write about extending MetricDefinition
14:12:41 <witek> I don't think it's the right way
14:13:01 <dougsz> Ok, thanks for the pointer.
14:13:40 <witek> I think it's a good change, it helps to make the notifications more customisable
14:13:48 <nseyvet> hi, sorry a bit late.
14:13:56 <witek> hi nseyvet
14:14:24 <dougsz> Thanks witek - I believe rbrandt had a use case for it as well.
14:15:33 <witek> persister already has 'reason_data' but I think it's not used
14:16:06 <witek> dougsz: if you could write a spec for this, that would be great
14:16:39 <dougsz> witek: Sounds good to me. I'll try and come up with something.
14:16:48 <witek> thanks
14:17:00 <dougsz> np - thanks for taking a look.
14:17:49 <witek> is anyone here from HPE who could comment?
14:18:14 <witek> Roland is out of office this week I guess
14:19:07 <witek> #topic Cassandra update
14:19:16 <jgu> Hi witek
14:19:23 <jgu> that's me and Scott
14:19:26 <witek> hi James
14:19:36 <witek> and Scott :)
14:20:20 <witek> thanks for the test framework
14:20:21 <jgu> https://storyboard.openstack.org/#!/story/2001292 we have completed the test plan: a mix of 1 mil fixed "admin" metrics and 3 mil active guest VM metrics (rolling over 10% at each 1 bil measurements/run). Throughput: 85k metrics/second with Cassandra on a three baremetal node cluster, 128GB RAM, 48 CPU, three disks each. Does the number sound good enough?
14:20:31 <jgu> witek: it was fun
14:21:03 <witek> so you have tested the Java persister, right?
14:21:09 <jgu> yes.
14:21:25 <witek> do you have results for Python as well?
14:21:39 <jgu> not yet, due to hardware availability issues
14:21:55 <jgu> now we are wrapping up Java, we can switch to Python
14:22:10 <jgu> if that's needed
14:22:35 <witek> how do you generate the set of metrics for the test scenario?
14:23:01 <jgu> I added some details in the storyboard: https://storyboard.openstack.org/#!/story/2001292
14:23:23 <jgu> we use JMeter with a kafka plugin to generate kafka metric messages
14:23:39 <jgu> so we tested kafka+persister+cassandra
14:24:00 <witek> OK, so two test plans are included in the tool already
14:24:09 <jgu> yes
14:24:26 <nseyvet> pretty nice!!
14:24:38 <jgu> nseyvet: thanks!
14:24:39 <nseyvet> I saw the jmeter files come in.
14:24:54 <witek> the requirement for the env is to have Kafka, Persister and TSDB installed, right?
14:25:07 <jgu> yes
14:25:31 <akiraY> wonderful!
14:25:44 <jgu> there is a small change in persister required for the tool to correctly count the throughput
14:26:06 <witek> is it documented somewhere?
14:26:18 <jgu> it is in the same code patch
14:26:21 <jgu> :-)
14:26:52 <akiraY> :D
14:27:03 <witek> we would need the same for Python?
14:27:03 <jgu> it changes the flush meter to reflect the metrics persisted, not the number of batches
14:27:30 <jgu> witek: ah yes. we haven't got to the same change for Python yet
14:28:02 <witek> fine for now
14:28:26 <jgu> when we test Python, we will need it. It's a very small and easy change
14:28:37 <jgu> thanks for pointing that out
14:28:40 <witek> with how many persister instances have you tested?
14:28:50 <jgu> three persister instances
14:29:19 <witek> nice job, thanks for that
14:29:36 <jgu> kafka/zookeeper, persister and tsdb are colocated on a 128GB, 48 CPU node
14:30:16 <witek> we should try to get the Cassandra changes merged soon
14:30:28 <witek> reviews are very welcome
14:30:32 <jgu> I was hoping so :-)
14:31:19 <witek> do you need help on the zuul configuration?
14:31:28 <jgu> scott?
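[editor's note] The benchmark jgu describes publishes Monasca metric messages onto the Kafka topic the persister consumes. The sketch below approximates how such messages could be constructed; the envelope layout is an assumption for illustration (the real test uses JMeter with a Kafka plugin, and the actual Monasca wire format may differ in detail).

```python
import json
import random
import time

def make_metric_message(name, dimensions, value, tenant_id="test-tenant"):
    # Approximation of a Monasca-style metric envelope; field layout
    # is an assumption, not the authoritative schema.
    return {
        "metric": {
            "name": name,
            "dimensions": dimensions,
            "timestamp": int(time.time() * 1000),  # milliseconds
            "value": value,
        },
        "meta": {"tenantId": tenant_id, "region": "RegionOne"},
        "creation_time": int(time.time()),
    }

# Simulate the mix from the test plan: fixed "admin" metrics plus
# rolling guest VM metrics (counts scaled way down for illustration).
messages = []
for i in range(10):   # stand-in for the 1 mil fixed admin metrics
    messages.append(make_metric_message(
        "cpu.idle_perc", {"hostname": f"admin-{i}"},
        random.random() * 100))
for i in range(30):   # stand-in for the 3 mil active guest VM metrics
    messages.append(make_metric_message(
        "vm.cpu.utilization_perc", {"resource_id": f"vm-{i}"},
        random.random() * 100))

# What a Kafka producer (here, JMeter's plugin) would send:
payload = [json.dumps(m) for m in messages]
print(len(payload), "messages prepared")
```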
14:31:50 <jgu> sgrasley is working on the zuul gate
14:31:55 <sgrasley> Yes please
14:33:03 <witek> I guess we can take care of it
14:34:19 <witek> anything else on that topic?
14:34:29 <jgu> nope
14:34:32 <jgu> thanks Witek
14:34:39 <witek> thanks for the code
14:35:17 <witek> #topic new meeting time
14:35:19 <jgu> thanks for reviewing the code in advance
14:35:54 <witek> would you like to make a survey for the new meeting time?
14:36:17 <witek> I have prepared a doodle
14:36:29 <witek> https://doodle.com/poll/s5fwqtu7ik898p57
14:36:30 <jgu> I like doodling :-)
14:37:04 <witek> times only for Wednesday, but we're free to change that as well
14:37:23 <nseyvet> which TZ are those hours?
14:37:34 <witek> should be in your local time
14:37:46 <jgu> do we choose one or multiples?
14:38:19 <jgu> never mind -- see the colors
14:38:33 <fouadben> done
14:38:35 <witek> nseyvet: I think you can set it yourself, my doodle displays UTC
14:38:58 <witek> nseyvet: the info is displayed above the table
14:39:28 <jgu> done here too
14:39:52 <witek> I will also send the link to the mailing list as not everyone is here
14:40:18 <akiraY> done
14:40:52 <jgu> witek: thanks for the survey
14:42:18 <witek> I'll share the result early next week on the mailing list
14:42:34 <witek> is that OK?
14:42:47 <nseyvet> yes
14:42:55 <jgu> yes
14:42:59 <fouadben> witek: yes
14:43:21 <akiraY> yes
14:43:37 <witek> #topic Summit update
14:44:04 <witek> last week we were in Sydney at the OpenStack summit
14:44:30 <witek> we had a hands-on workshop and 4 presentations
14:44:42 <witek> sc: thanks for the help on the workshop
14:45:32 <witek> Monasca was also presented at the SUSE and Fujitsu booths in the marketplace
14:46:02 <akiraY> :)
14:46:25 <witek> one interesting initiative has started: the self-healing SIG
14:46:48 <witek> https://wiki.openstack.org/wiki/Self_healing_SIG
14:47:05 <witek> it actually started already at the PTG in Denver
14:47:41 <witek> the goal is to coordinate the efforts for self-healing OpenStack across different projects
14:48:20 <witek> here's the etherpad for the Forum session from Sydney:
14:48:22 <witek> https://etherpad.openstack.org/p/self-healing-rocky-forum
14:48:51 <witek> bi-weekly meetings are planned, and a dedicated mailing list
14:49:15 <witek> feedback was requested to define goals and scope
14:49:57 <sc> witek: you're welcome
14:50:55 <witek> I think that's all from my side
14:51:06 <witek> it was nice meeting you in person, akiraY :)
14:51:07 <nseyvet> I do have one question if there is some time?
14:51:14 <witek> sure
14:51:27 <nseyvet> I have a question about the agent <-> monasca-api <-> Keystone and plugins. The agent has a keystone_url config option, but at the same time the libvirt plugin has its own authentication credentials towards Nova, for example. How is authentication/authorization architected from a monasca agent perspective vis-a-vis Monasca? Should each plugin independently use Keystone, or is it centralized by the agent configuration?
14:51:45 <akiraY> o/
14:52:54 <witek> libvirt is a special case I guess, because it has to collect information from the nova service
14:53:32 <witek> I'm not sure if the standard monasca-agent user has enough permissions to get the required info
14:54:07 <nseyvet> atm it reads the nova.conf file and uses those credentials. correct?
14:55:06 <witek> aren't the credentials configured in libvirt.yaml?
14:55:17 <nseyvet> yes
14:55:36 <nseyvet> I thought the "auto-detect" feature would read from nova.conf
14:55:40 <witek> ok
14:56:11 <nseyvet> but as a general principle, how should it work?
14:56:46 <nseyvet> and what is the purpose of the "keystone_url"?
14:56:59 <witek> in general every plugin should use the monasca-agent credentials
14:58:03 <witek> the agent gets a token from Keystone and sends it for authorisation to the monasca API
14:59:33 <witek> I guess I have to close the meeting soon
14:59:37 <nseyvet> ok so the agent acts as a normal OpenStack project and does not rely on the Monasca API to authenticate it?
15:00:01 <nseyvet> (the www_authenticate_uri config option)
15:00:39 <witek> nseyvet: let's continue on #openstack-monasca
15:00:48 <witek> have to finish now
15:00:50 <witek> bye
15:00:54 <nseyvet> bye
15:00:58 <nseyvet> ty
15:01:03 <witek> #endmeeting
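[editor's note] The flow witek describes, where the agent authenticates once against keystone_url and reuses the token for its API calls, corresponds to the standard Keystone v3 password authentication. Below is a minimal sketch of the request body the agent would POST to {keystone_url}/v3/auth/tokens; the username, password, and project names are placeholders, not real configuration.

```python
import json

def keystone_auth_request(username, password, project_name,
                          user_domain="Default", project_domain="Default"):
    # Standard Keystone v3 password-auth body, per the Identity API.
    # Credentials here are placeholders for illustration only.
    return {
        "auth": {
            "identity": {
                "methods": ["password"],
                "password": {
                    "user": {
                        "name": username,
                        "domain": {"name": user_domain},
                        "password": password,
                    }
                },
            },
            "scope": {
                "project": {
                    "name": project_name,
                    "domain": {"name": project_domain},
                }
            },
        }
    }

body = keystone_auth_request("monasca-agent", "secret", "monasca")
# Keystone returns the token in the X-Subject-Token response header;
# the agent then sends it to the Monasca API as the X-Auth-Token header.
print(json.dumps(body))
```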