15:00:47 <witek> #startmeeting monasca
15:00:49 <openstack> Meeting started Wed Dec 12 15:00:47 2018 UTC and is due to finish in 60 minutes.  The chair is witek. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:50 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:52 <openstack> The meeting name has been set to 'monasca'
15:00:58 <witek> Hello
15:01:08 <koji_n> hi
15:01:17 <witek> Hi koji_n
15:01:25 <dougsz> hey all
15:01:43 <witek> hi dougsz
15:02:08 <witek> here is the agenda for today:
15:02:12 <witek> https://etherpad.openstack.org/p/monasca-team-meeting-agenda
15:02:20 <witek> not much really
15:02:41 <witek> hi joadavis
15:02:46 <witek> good to see you
15:03:01 <dougsz> +1
15:03:18 <joadavis> o/
15:03:21 <witek> let's start
15:03:29 <witek> #topic events-listener
15:03:37 <witek> https://review.openstack.org/583803
15:03:45 <joadavis> Thanks again for the comments.
15:03:48 <witek> we have further discussion on spec
15:04:45 <joadavis> I'll have to catch up. :)
15:04:58 <witek> I'd like to get consensus about how it should work and where it should be deployed
15:05:29 <witek> first question is: should it send to Kafka or Monasca API
15:06:38 <dougsz> +1 for Monasca API - I think it's more flexible and the overhead is tolerable
15:07:14 <witek> as joadavis has pointed out, we could indeed implement both scenarios
15:07:22 <witek> but we should start with one :)
15:07:40 <joadavis> I'd like to do right to Kafka, but if we think we need to be able to separate the collection of events from the control plane (like in nova cells) we should start with API
15:07:58 <witek> +1
15:08:03 <joadavis> in other words, if API is more flexible we can start there
15:08:44 <joadavis> So the next question - collection in an API service instance or collection in an agent instance
15:08:50 <witek> #agreed first implementation sends events to Monasca API
15:10:30 <dougsz> +1 for agent instance, the API might not have network access to RabbitMQ
15:11:04 <witek> I think it should be a separate service running next to the agent though
15:11:24 <dougsz> Ah yeah sure, that makes sense to me
15:11:56 <dougsz> Good comments about the forwarder not being suitable on the spec
15:13:02 <witek> I think the new service could be similar to ceilometer-agent
15:13:56 <witek> joadavis: what do you think?
15:14:20 <joadavis> as a separate service/agent, we should be able to keep it pretty light.
15:14:33 <witek> right
15:14:35 <joadavis> my only worry is that it will be another thing to install.
15:14:51 <joadavis> but we can try to find a way to make it easy to include with a monasca-agent install
15:15:05 <witek> true
15:15:18 <witek> we could place the code next to collector and forwarder
15:16:11 <dougsz> agreed that in an ideal world it would be nice if it was installed with the monasca-agent
15:16:38 <dougsz> All the code for fetching the endpoint, and the config file is there as well.
15:17:36 <witek> that could work, I guess
15:19:22 <witek> OK, let's put it in monasca-agent repo then and extend the existing config and monasca_setup script
15:19:23 <joadavis> should be good to have with monasca-agent.  I'll have to do a little digging to see what options we have there
15:20:27 <dougsz> Sounds good to me :)
15:21:27 <witek> and in case we want to collect metrics in a different way, we can still stop/disable the collector and forwarder services
15:22:05 <dougsz> My first thought is a forwarder which buffered to disk for events could help avoid rabbit filling the RAM if the notifications pile up, but I think that is probably overkill at the moment.
15:22:49 <witek> I think RabbitMQ can handle it better anyway
15:23:24 <joadavis> Would RabbitMQ hang on to the notifications indefinitely?  I don't think that other services that consume the notifications have that concern currently.
15:23:26 <dougsz> Can it drop notifications if the RAM fills up or do you mean a separate instance for notifications?
15:23:36 <joadavis> we can check against how Vitrage or others handle it
15:24:01 <witek> joadavis: that would be good
15:24:23 <dougsz> joadavis: good question
15:24:38 <joadavis> I would hope it could drop old notifications if they aren't consumed, but I think it depends on the type of listener you are using with Rabbit
15:25:52 <dougsz> Just thinking about if the events API went down, how we would avoid RAM filling up.
15:26:06 <joadavis> Events should be a lower amount of traffic, so either our monasca component listening to events would have to be down for a long time, or there would have to be a large storm of events while the listener was down, to run into space issues
15:27:03 <joadavis> we could do some buffering in our events listener agent and drop old messages as needed
15:27:41 <joadavis> similar to what the forwarder does when it requeues messages when the api is down
15:28:20 <witek> let's check it, and we can then build in such logic, if needed
15:28:39 <dougsz> Sounds like a plan to me. Hopefully the concern is unfounded.
15:28:44 <joadavis> :)
15:28:49 <witek> :)
15:29:18 <witek> cool, seems that we have a clearer view of how the component should look now
15:29:43 <dougsz> Looking forward to deploying it :)
15:30:16 <joadavis> I'll rewrite the spec to match. :)
15:30:25 <witek> great, thanks joadavis
15:30:37 <dougsz> thanks joadavis
15:30:46 <witek> hope to merge it before Christmas :)
15:31:13 <dougsz> yep
15:31:39 <witek> #topic additional meeting time for Asia/Australia
15:32:01 <witek> I have attended the self-healing SIG last week
15:32:16 <witek> they host two team meetings for different time zones
15:32:38 <witek> basically, everyone can choose, which one to attend
15:33:19 <witek> the current meeting time is very inconvenient for Japan/China/Australia
15:34:05 <dougsz> I think it would be good to try - if we have lots of people turn up for the new meeting, that could only be a good thing.
15:34:44 <joadavis> might be worth it even if we just got 2 new contributors.  Have you had any interest in the past?
15:35:23 <witek> yes, people from the Tacker project for example
15:35:34 <witek> India/Japan
15:36:02 <witek> I see some reviews from China
15:36:26 <witek> I think we can give it a try
15:36:27 <joadavis> do you have a time slot in mind?
15:36:47 <witek> I thought about Thursday morning UTC time
15:37:17 <witek> middle of the night for you :)
15:37:44 <joadavis> I can read the logs later. :)
15:38:00 <witek> no, I really don't expect anyone apart from me to attend twice
15:39:15 <witek> OK, I'll set up the slot and send an email to the list
15:39:54 <dougsz> The time is good for me, so I can try and log in at least
15:40:07 <witek> thanks dougsz
15:40:12 <witek> #topic AOB
15:40:52 <witek> tempest tests are currently blocked on influxdb client error
15:40:56 <witek> https://review.openstack.org/624334
15:41:23 <witek> I hope requirements team will merge it soon
15:42:23 <witek> I also have problems installing DB schema with Alembic on Python 3
15:42:32 <witek> https://review.openstack.org/622361
15:42:49 <witek> the tests fail randomly
15:43:21 <witek> with pymysql.err.IntegrityError: (1215, 'Cannot add foreign key constraint')
15:44:11 <dougsz> Has anyone else reproduced Adrian's error in devstack?
15:44:56 <witek> yes, you can observe it in OpenStack CI
15:45:32 <witek> no, sorry
15:45:56 <witek> bug reported by Adrian was fixed in your change
15:47:29 <witek> any other topics?
15:48:08 <dougsz> I gathered more feedback on cells deployments
15:48:16 <witek> OK, please share
15:48:26 <joadavis> I had a bug passed to me yesterday related to the monasca-installer. I'm going to ask Dobek about it, but I'll cc you witek
15:48:39 <dougsz> Nothing from Cern yet, but Rackspace were using Yagi to collect notifications to a central repo.
15:49:30 <witek> https://github.com/rackerlabs/yagi
15:49:41 <joadavis> I would think most cells users would want to centralize the notifications, but if the documented architecture doesn't require it we might have to support them being separate
15:50:08 <dougsz> I suppose now that we have decided to post events to the API, it doesn't matter if you leave them in a cell, or tell oslo to ship them to a central repo.
15:51:55 <witek> last commit 2 years ago
15:52:07 <dougsz> Agreed joadavis, best practice seems to be to use a central notification repo
15:53:00 <dougsz> yeah - it is abandoned - it seems just configuring oslo is the simplest way of collecting them all in one place
15:53:23 <witek> thanks dougsz
15:53:30 <joadavis> +1
15:53:46 <dougsz> np - only one data point so far :)
15:54:23 <witek> OK, thanks everyone for today
15:54:35 <witek> thanks for constructive discussion
15:54:39 <koji_n> witek: when will the meeting for asian time zone start?
15:55:03 <dougsz> thanks all
15:55:09 <witek> koji_n: I'll send a message to the openstack-discuss mailing list
15:55:13 <witek> next week?
15:55:21 <koji_n> ok, thx!
15:55:37 <witek> thanks, bye bye
15:55:57 <witek> #endmeeting