15:00:29 #startmeeting monasca
15:00:33 Meeting started Wed Jan 20 15:00:29 2016 UTC and is due to finish in 60 minutes. The chair is rhochmuth. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:35 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:37 The meeting name has been set to 'monasca'
15:00:54 o/
15:00:55 o/
15:00:59 o/
15:01:00 o/
15:01:00 o/
15:01:01 o/
15:01:02 hi
15:01:03 o/
15:01:07 o/
15:01:12 Agenda for Wednesday January 20, 2016 (15:00 UTC)
15:01:13 1. monasca-log-api [TSV]:
15:01:13 1.1 Discuss adding batching support (v2.0/log/multiple)
15:01:13 1.2 Discuss moving Dimensions to body instead of headers (similar to monasca-api)
15:01:14 2. Translations for monasca-ui (Zanata)
15:01:14 3. Tag/publish latest monasca-agent to pypi?
15:01:15 X. Question for Anomaly & Prediction Engine [ho_away]
15:01:23 Hello everyone
15:01:32 Light agenda today
15:01:35 good morning
15:01:42 please add items at https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:02:01 First thing I want to cover is that there are a lot of reviews
15:02:05 in progress
15:02:16 I've been working my way through them
15:02:20 sorry about the delay
15:02:27 i could use help
15:02:41 don't ever go on vacation
15:02:51 it is painful returning
15:03:14 anyway, things are starting to get back to where they were prior to the holidays in terms of outstanding reviews
15:03:23 i'll try to help if you want, add me
15:03:29 thanks
15:03:34 rhochmuth, I will continue with the logging review
15:03:54 rhochmuth: thanks for taking a look at the pull requests
15:04:12 witek: yes, i tried, but was unable to get it installed
15:04:19 i've seen
15:04:31 i'll take a look and update
15:04:39 Thanks
15:04:52 So, how about moving on to the first topic
15:05:01 #topic monasca-log-api
15:05:04 tsv you are up
15:05:18 batching support for the api
15:05:33 thanks, we would like to work on adding batching support for the log api, is anybody already working on this?
15:05:53 not yet, but we need it as well
15:06:26 my team here could get started with that. witek, you ok with that?
15:06:34 we created an item in the wiki some time ago: https://wiki.openstack.org/wiki/Monasca/Logging#Request_Headers
15:07:02 could you create a short blueprint for that
15:07:04 i was looking at the monasca-api code and it looks like it pretty much has everything we need to support batching
15:07:09 sure
15:07:11 perhaps we could split the job
15:07:22 we have to update the agent as well
15:07:39 witek, do we need a separate API for this? yes, I guess
15:08:04 additional resource in log-api i would think
15:08:29 i think one of the central issues was in handling text logs
15:08:39 how do you know how a newline should be treated
15:09:04 the "multiple" endpoint would treat newline characters as delimiters for log messages
15:09:13 that is what i recall
15:09:17 in the case of json
15:09:19 rhochmuth, based on content-type?
15:09:23 a single vs multi endpoint is not required
15:09:41 correct, the content type determines if it is a json or text log
15:10:02 rhochmuth, i like that, that would keep it consistent with the metrics API, for example
15:10:47 so in the metrics api you can supply a single metric in the json body
15:10:57 or you can supply multiple metrics as a json array
15:11:06 so is multiple intended to send multiline log entries or multiple log entries?
15:11:09 we could have done something similar in the case of json logs
15:11:20 then we wouldn't have required a new endpoint
15:11:33 however, the problem has been how to handle text logs
15:12:00 i thought multiple was to send multiple log lines
15:12:11 do i misunderstand that
15:12:19 that appears to be the way the python api is written
15:12:27 one log entry can consist of several lines
15:12:46 one could also send several log entries in a single request
15:12:49 so it is actually a single log entry with multiple lines?
15:13:42 i guess i'm confused too
15:13:45 we have handled multiline log entries with a logstash grok pattern
15:14:02 the python log api in on_post receives a single request body and then publishes it to kafka
15:14:11 ahhh, i see
15:14:41 so, in that case we don't need the multiple endpoint
15:15:01 witek: but this is not the case of analyzing the log to understand relationships among strings?
15:15:02 i think i misunderstood the api
15:15:34 witek: what I mean is that in the batch log all the lines will be stored as messages in the queue and you can still correlate them and create single entries in ES
15:15:35 so, the "multiple" endpoint would be for handling multiple log files simultaneously
15:16:30 fabiog: i see, yes, it would be useful to extend the api for that
15:17:07 witek: so you have a single api
15:17:31 witek: then, whether those lines are correlated is solved when the messages are interpreted and stored to ES
15:17:38 witek: makes sense?
15:18:32 at the moment multiline entries are correlated by logstash in the transformer
15:18:52 yes, using patterns.
15:18:59 so for now, based on what i've heard, is there any pressing need to add the "multiple" endpoint
15:19:01 but are those sent as single or multiple messages?
15:19:10 in the kafka queue?
15:19:22 agent sends them as single
15:19:27 from what i understand, it is sent as a single message
15:20:00 so the agent sends multiple lines to the api as a text blob
15:20:09 right
15:20:12 the api publishes the same message body to kafka as a single message
15:20:27 rhochmuth: no
15:20:27 logstash does the parsing into multiple log messages
15:20:32 oops
15:20:34 sorry
15:20:36 that is the point rhochmuth
15:20:48 they already treat multi-line as multi-messages
15:20:54 agent sends line by line
15:21:01 so it is a matter of reconciling that
15:21:14 agent sending line by line is not going to be performant
15:21:17 so I think the current API can already handle multiple log entries
15:21:19 transformer uses grok to correlate the lines for a single log entry
15:21:45 ok, i take back everything i said
15:21:50 if batching is supported by /single, is that good enough then?
15:21:59 the agent sends a single log line to the api
15:22:04 tsv: I think it could be
15:22:10 the api publishes the single message to kafka
15:22:21 logstash parses it
15:22:31 so, we need to add the multiple endpoint
15:22:35 correct?
15:22:41 yes
15:22:57 and I also see the need for an endpoint 'bulk'
15:23:13 rhochmuth: no, if logstash can make sense of the multiple messages and understand where a log ends and a new one starts
15:23:18 for sending more than one log entry in one request
15:24:05 so, what was wrong with what i said above
15:24:24 can the "single" api handle multiple log messages?
15:24:28 well, if logstash can do that, then you don't need a new api
15:24:29 in a single request
15:24:42 correct, that was my point
15:24:52 a multi-line, multiple-log payload will translate into several single messages in the queue
15:25:05 then it is up to logstash to reconstruct which messages go together
15:25:13 the difference between single vs multiple is that there is some delimiter between messages
15:25:31 right?
15:25:41 like a newline character
15:25:45 fabiog, how would a multi-line log message for a single entry be differentiated from multiple log entries for plain text?
15:25:58 so, why do any parsing in the log api
15:26:03 let logstash handle it all
15:26:12 tsv: well, for instance there is no date at the beginning of the second part of the message
15:26:48 rhochmuth: that is what I am trying to understand, if logstash can handle it we should have 1 API endpoint, if not then we need 2
15:26:52 fabiog, we don't have any schema for the plain text logs right? do we?
15:27:06 fabiog: ok, i agree
15:27:23 tsv: no, but logstash uses a pattern to parse the logs
15:27:33 tsv: we have only json
15:27:35 so you will need to create yours based on the log format you are ingesting
15:28:42 all, why do we need to support plain text then? could we always expect a json payload?
15:29:31 the api builds the envelope anyway and it would be easy if it had to always handle a json payload?
15:30:07 i think pure json would make things much simpler too
15:30:17 seems like a separate design session is needed for this topic?
15:30:28 thank you moderator
15:30:32 :)
15:30:37 :)
15:30:46 ddieterly: yeah, maybe it would be good as a mid-cycle topic
15:30:48 you're welcome
15:30:58 alright, let's close on this one today
15:31:09 I would welcome a blueprint on that
15:31:15 i can put together a blueprint for this
15:31:16 we'll have some email follow-up discussion and plan on a session
15:31:20 sure witek
15:31:21 thanks tsv
15:31:34 let's cover it in the mid-cycle
15:31:42 +1
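[Editor's note: a minimal sketch of the batching idea discussed above, for readers following along. The /single endpoint and header-carried dimensions follow the wiki page linked at 15:06:34 and agenda item 1.2; the bulk endpoint name, payload shape, host, port, and header names are assumptions for illustration only, not the agreed design (see the batching-support-for-log-api blueprint linked later in this meeting).]

```python
# Illustrative sketch only -- not the merged implementation.
import json
import requests

LOG_API = "http://monasca-log-api.example.com:5607"  # hypothetical host/port
TOKEN = "KEYSTONE_TOKEN"                             # token obtained from keystone

# Current behaviour discussed above: one request per log entry, with
# dimensions carried in request headers (agenda item 1.2 proposes moving
# them into the body). Header names here are assumed for illustration.
requests.post(
    LOG_API + "/v2.0/log/single",
    headers={
        "X-Auth-Token": TOKEN,
        "Content-Type": "text/plain",        # text vs json log, per 15:09:41
        "X-Application-Type": "apache",      # assumed header name
        "X-Dimensions": "hostname:web01,service:web",
    },
    data="127.0.0.1 - - [20/Jan/2016] \"GET / HTTP/1.1\" 200",
)

# Hypothetical batched payload for the proposed bulk/multiple endpoint:
# several log entries in one JSON request, analogous to the metrics API
# accepting either a single metric or a JSON array of metrics.
batch = {
    "dimensions": {"hostname": "web01", "service": "web"},
    "logs": [
        {"message": "first log entry"},
        {"message": "second log entry\nwith a continuation line"},
    ],
}
requests.post(
    LOG_API + "/v2.0/log/bulk",              # endpoint name not yet decided
    headers={"X-Auth-Token": TOKEN, "Content-Type": "application/json"},
    data=json.dumps(batch),
)
```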
15:32:03 #topic translations for monasca-ui
15:32:13 i missed the mid-cycle timelines, when and where?
15:32:27 wed/thurs feb 3rd and 4th
15:32:33 tsv: next wed and thu 7am-12pm PST
15:32:35 it will be remote via webex
15:32:44 two weeks
15:32:44 rhochmuth, fabiog, thanks
15:33:02 thanks - how will you circulate webex details?
15:33:54 openstack-dev [Monasca]
15:33:59 maybe add to https://etherpad.openstack.org/p/monasca-team-meeting-agenda as well?
15:34:11 bmotz: bklei yes
15:34:17 I will add the coordinates there
15:34:18 i'll create an etherpad for the agenda
15:34:26 perfect
15:34:30 great, thanks
15:34:36 once we have the page with the agenda
15:35:11 zanata posted a topic on translations
15:35:14 OpenStack uses Zanata for translations
15:35:22 no, it was me :)
15:35:29 ohhh, that isn't a person
15:35:33 sorry
15:35:33 https://wiki.openstack.org/wiki/Translations/Infrastructure
15:36:00 can't you just learn english
15:36:10 :)
15:36:28 yes i should :)
15:36:29 ok, before i get in trouble again, what is zanata
15:36:52 a service to handle translations
15:37:06 OpenStack has been using it since September
15:37:20 we could use it for monasca-ui
15:37:43 one has to configure the project in openstack-infra
15:37:59 and jenkins pulls the translation strings every day
15:38:46 it all sounds great to me
15:39:04 as i don't have any experience with this yet
15:39:04 Me too. I want to try translating into Japanese.
15:39:45 so we will push the config change to gerrit
15:39:54 shinya_kwbt: so are you working with witek on this?
15:40:30 witek: sounds good!
15:41:23 ok, sounds like we are all in agreement this is a good idea
15:41:28 OK, I don't have experience with Zanata, but I will ask other people who translate often.
15:41:36 thanks witek and shinya_kwbt
15:42:22 #topic Tag/publish latest monasca-agent to pypi?
15:42:36 i guess that is another request to apply a tag
15:42:41 i'll do it right after this meeting
15:42:44 yes, that's us
15:42:45 sorry about the delay
15:42:46 por favor
15:42:48 np
15:42:59 wasn't sure if there was a reason not to
15:43:10 i'm not aware of any reasons
15:43:32 there have been some changes that you'll want to check out
15:43:32 cool
15:43:52 for sure, we haven't pulled an agent since October
15:44:16 from what i recall the changes that david schroeder made to vm monitoring are probably the most interesting
15:44:38 he modified vm.host_status and added vm.ping_check
15:44:53 ok, will pull it into lab/test env as soon as you tag/publish
15:45:01 ok
15:45:15 could we tag monasca-log-api as well?
15:45:26 sure,
15:45:36 i'll tag the api and the agent
15:45:57 so, we have around 15 minutes left
15:46:05 we could open the floor to any topics
15:46:09 at this point
15:46:20 sorry, there was a question around anomaly detection
15:46:33 is ho_away here
15:46:37 thanks! this is my first time joining this meeting. i'm really interested in the anomaly & prediction engine. now i have a question about the current status and future plan.
15:46:59 so, about a year ago this was an area that i was investing a lot of time in
15:47:09 but, i haven't gotten back to it in a while
15:47:35 what would you like to work on
15:47:44 i read your code and i would like to move it ahead. what can i do for it?
15:47:44 i think monasca provides an excellent platform for building this
15:48:02 i think so
15:48:18 witek, blueprint created: https://blueprints.launchpad.net/monasca/+spec/batching-support-for-log-api
15:48:43 i think there are lots of areas to work on with respect to anomaly detection
15:48:47 tsv: thanks
15:49:11 it would be difficult to get you up to speed on it right now
15:49:23 perhaps a topic for another time or email exchanges
15:49:25 tsv: thanks
15:49:56 rhochmuth: thanks! i will send you an email about what i want to do
15:50:03 ok, sounds good
15:50:09 are there other folks interested in this area
15:50:22 wondering if this should be moved to the openstack-dev list
15:50:23 rhochmuth: please sign me in :-)
15:50:26 in using it :)
15:50:47 ho_away: sounds like you have some other interest
15:51:11 i would propose discussing in the openstack-dev [monasca] list
15:51:17 unless there is a better alternative
15:51:19 :-)
15:51:22 rhochmuth: +1
15:51:27 i'll need to pay attention to that list better
15:51:57 thanks ho_away
15:52:05 rhochmuth: you can send a meeting invite to the list and people interested can join
15:52:16 fabiog: yes i can
15:52:37 is rbak around? any news from grafana2?
15:52:44 ho_away: what timezone are you in?
15:53:08 fabiog: +9
15:53:21 fabiog: i live in japan
15:53:43 ho_away: ok, so probably early morning is good for you
15:54:04 ho_away: early morning US time
15:54:05 fabiog: thanks! really appreciate it
15:54:25 fabiog: ok
15:54:36 as rbak left just before i asked :) - any news from grafana2?
15:55:40 he's coming
15:55:45 he's back
15:55:45 I'm back
15:56:08 any news on grafana2?
15:56:18 Not much new on grafana. The keystone integration works in that you can log into grafana2 with keystone creds
15:56:50 I'm working on making those creds pass through to the datasource so it can use them to authenticate to monasca
15:57:05 That should be the last chunk of work
15:57:47 thanks rbak
15:57:52 is the code posted?
15:58:07 No, I keep meaning to do that.
15:58:12 I'll do that this afternoon
15:58:15 thanks
15:58:27 please post to the openstack-dev [monasca] list
15:58:42 sounds like tgraichen would like to get involved too
15:58:49 cool - i'll have a look at how to maybe make it keystone v3 ready as soon as it's posted somewhere
15:58:58 thanks
15:59:03 and will test it of course
15:59:22 so, i have some actions
15:59:38 let's try to start using the openstack-dev list for correspondence during the week
15:59:47 thanks everyone
15:59:54 thanks, bye
16:00:04 thank you, cheers
16:00:07 thanks
16:00:24 bye :)
16:00:36 #endmeeting
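[Editor's note: a minimal sketch of the token flow rbak describes above, i.e. exchanging keystone credentials for a token and using that token to authenticate calls to the monasca API. The keystone and monasca URLs, tenant, and credentials are placeholders; the actual grafana2 datasource code had not been posted at the time of the meeting.]

```python
# Illustrative sketch only: keystone v2.0 password auth followed by a
# token-authenticated call to the monasca API. A keystone v3 variant
# (tgraichen's follow-up) would POST to /v3/auth/tokens and read the
# token from the X-Subject-Token response header instead.
import requests

KEYSTONE = "http://keystone.example.com:5000"      # placeholder endpoint
MONASCA = "http://monasca-api.example.com:8070"    # assumed monasca-api endpoint

# Exchange the user's credentials for a scoped token.
resp = requests.post(
    KEYSTONE + "/v2.0/tokens",
    json={
        "auth": {
            "tenantName": "mini-mon",              # placeholder tenant
            "passwordCredentials": {
                "username": "mini-mon",            # placeholder credentials
                "password": "password",
            },
        }
    },
)
resp.raise_for_status()
token = resp.json()["access"]["token"]["id"]

# The datasource can then authenticate its monasca requests with the token.
metrics = requests.get(
    MONASCA + "/v2.0/metrics",
    headers={"X-Auth-Token": token},
)
print(metrics.json())
```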