00:00:42 #startmeeting congressteammeeting
00:00:42 Meeting started Thu Sep 7 00:00:42 2017 UTC and is due to finish in 60 minutes. The chair is ekcs. Information about MeetBot at http://wiki.debian.org/MeetBot.
00:00:43 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
00:00:46 The meeting name has been set to 'congressteammeeting'
00:00:49 Hi all
00:01:04 Hi all. Welcome back to the Congress meeting.
00:01:17 as usual topics are kept here: #link https://etherpad.openstack.org/p/congress-meeting-topics
00:01:25 please feel free to read/comment/add
00:05:20 ok let’s dive in then.
00:06:11 quick announcement: I’m joining a webinar today/tomorrow on OpenStack Pike updates to talk a little bit about Congress. Here’s a link: https://content.mirantis.com/Webinar-OpenStack-Pike_Landing-Page.html
00:07:17 Not many items of business today.
00:07:20 #topic PTG
00:09:26 We’re still firming up the agenda, but we have topics around Congress use cases and direction, in particular the fault management / self-healing use cases.
00:10:02 Expecting a special session with other projects on how the different services fit together for fault management.
00:10:12 Anything to talk about here for PTG?
00:10:47 Sounds good to me
00:11:56 Ok moving on then =)
00:12:50 #topic meeting time
00:13:12 The end of a cycle is a good time to think about whether our current meeting time is working for us and whether we want to change it.
00:13:29 But since most of us aren’t here today, maybe we’ll table the topic
00:13:44 I can initiate it over email.
00:13:59 Sounds good to me
00:14:00 thinrichs: do you have any thoughts about meeting time?
00:14:20 Historically it's been hard getting something that works for ramineni, masahito, and anyone in the US.
00:14:26 Timezone skew is large
00:14:37 There's never been a good time to do it
00:15:04 But certainly it makes sense to revisit on the ML and see if anyone's schedule has changed
00:16:23 thinrichs: got it.
great we’ll do that then.
00:16:40 #topic open discussion
00:16:52 thinrichs: anything else we want to talk about today?
00:17:30 Not from me. Any reviews that need my attention?
00:18:46 Nothing specifically requiring your attention. The most substantive thing around is songming’s QoS patch, ready for review #link https://review.openstack.org/#/c/488992/
00:19:07 That breaks the problem around QoS into 2 drivers, right?
00:19:32 along with tempest testing all done. so it’d be good to get that in.
00:19:33 right.
00:21:00 The process of writing that driver, along with the experience working on the magnum and designate drivers (incomplete because of tempest tests),
00:21:31 showed me just how difficult it is to do tempest tests
00:22:46 mostly around figuring out how to interact with each new service. the bulk of the total work ends up being the tempest tests, and it’s a huge hurdle for newcomers.
00:23:02 I wish I knew a good solution but I don’t.
00:23:43 Cross-system tests are always hard b/c you need to understand both ends. At least it's easy to get the non-Congress service running with Tempest.
00:25:35 thinrichs: right. It makes me wonder whether insisting that we get the tempest tests done along with new drivers is the right way to go. it seems necessary because otherwise we end up with broken drivers all the time. but it’s also a huge hurdle. anyway…
00:26:47 I don't see a good solution either.
00:26:52 Looking through the reviews...
00:27:04 Do we know why these DSENode tests are flaky?
00:27:05 https://review.openstack.org/#/c/498996/1
00:27:28 Those look to be pretty core. Is it just timeouts?
00:28:11 I don’t know yet. ramineni filed a bug on it and is looking into it.
00:28:19 yes it looks pretty core.
00:28:37 if it’s something fundamentally unstable about the in-mem transport, then we have a serious problem.
00:29:18 on the other hand, I don’t remember ever noticing it affecting anything other than these tests.
00:29:28 If all the other tests are working, though, it'd be weird if there was something really wrong.
00:29:56 right, that’s why I’m not too worried. but we need to understand what’s going on.
00:30:56 hi, sorry I’m late
00:31:03 hi masahito!
00:31:36 we’re just in open discussion now.
00:32:23 Another topic is the place of Congress in a larger fault-management/self-healing infrastructure.
00:33:04 so far all the examples I have found have fairly simple conditions for triggering action.
00:34:21 which may not justify a place for the flexible policy language of Congress, because they can be done directly in monitoring services like Monasca or Vitrage.
00:35:06 So I’m looking for examples of the need for more complex conditions that span data sources and services, to better understand the place for Congress.
00:35:34 definitely love to hear/see any thoughts or references you may have.
00:36:42 one of the moderators in the discussion is my colleague.
00:37:07 masahito: right. Sampath?
00:37:40 so if there is any question you want answered, I can ask him.
00:37:42 right
00:38:49 got it.
00:43:33 I’m looking to understand whether/where there is a need for complex multi-source/service conditions for fault management actions. alarming services can set certain conditions. workflow services can take proper actions based on conditions too. so one could connect the alarming services directly to the workflow services. I think there is a benefit to having a policy layer in between that consumes the alarms and other data before triggering the workflows.
00:43:43 but we need to understand and make the case.
00:46:42 anyway do hit me up if you have any more thoughts or suggestions later.
00:46:51 anything else we want to talk about?
00:46:51 got it.
00:49:11 ok unless there is something else let’s call it a day.
00:49:32 next week is PTG so let’s cancel the IRC meeting.
00:49:38 Sounds good
00:50:05 I’ll announce on the ML.
00:50:29 Ok see you all next time then!
have a great week
00:50:41 see you
00:51:15 bye!
00:51:19 #endmeeting
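The multi-source condition discussed at 00:43:33 (a policy layer joining alarm data with data from another service before triggering a workflow) can be sketched as follows. This is a minimal illustration only: the table shapes, field names, and the evacuation rule are invented for the example, not the actual Monasca, Nova, or Congress schemas.

```python
# Hypothetical sketch of a policy-layer condition spanning two data
# sources. Rows are dicts mimicking datasource-driver tables; all
# field names are illustrative, not real service schemas.

alarms = [  # e.g. rows from an alarming service
    {"hostname": "compute-1", "metric": "cpu.load", "state": "ALARM"},
    {"hostname": "compute-2", "metric": "cpu.load", "state": "OK"},
]
servers = [  # e.g. rows from a compute service
    {"id": "vm-a", "host": "compute-1", "ha_enabled": True},
    {"id": "vm-b", "host": "compute-1", "ha_enabled": False},
    {"id": "vm-c", "host": "compute-2", "ha_enabled": True},
]

def evacuation_candidates(alarms, servers):
    """Datalog-style join, roughly:
        evacuate(vm) :- alarm(host), server(vm, host), ha_enabled(vm).
    The condition combines two data sources, which is the kind of rule
    a policy layer can express but a single alarming service cannot."""
    alarmed_hosts = {a["hostname"] for a in alarms if a["state"] == "ALARM"}
    return [s["id"] for s in servers
            if s["host"] in alarmed_hosts and s["ha_enabled"]]

# Only vm-a is on an alarmed host AND opted into HA remediation.
print(evacuation_candidates(alarms, servers))  # -> ['vm-a']
```

The point of the sketch is the join itself: connecting the alarming service directly to the workflow service could only act on `alarms`, while the policy layer can also consult per-server data (here the hypothetical `ha_enabled` flag) before handing the result to a workflow.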