20:00:12 #startmeeting Octavia
20:00:13 Meeting started Wed Jun 21 20:00:12 2017 UTC and is due to finish in 60 minutes. The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:17 The meeting name has been set to 'octavia'
20:00:23 Hi folks!
20:00:42 o/
20:01:20 hello
20:01:22 o/
20:01:49 #topic Announcements
20:02:13 Just my regular reminder, feature freeze is coming:
20:02:14 We are heading towards feature freeze Pike-3, July 24th
20:02:40 we should probably start an etherpad for the PTG as well to gauge interest
20:02:43 So please keep that in mind for things you want to land in Pike
20:03:02 We can, that is September, so it might be a bit early....
20:03:13 We will have a room dedicated for three days
20:03:25 I have reserved that already
20:03:38 NICE
20:03:54 Next PTG is September 11-15 in Denver
20:04:00 #link https://www.openstack.org/ptg#tab_schedule
20:05:05 #link https://etherpad.openstack.org/p/PTG-Queens-Octavia
20:05:29 Ok, there we go
20:06:06 +1
20:06:21 Ok, other announcements?
20:06:48 There is talk on the mailing list about the "big tent" and what we call "official" projects
20:07:15 Also there is an interesting thread on Trove, which is proposing a complete re-architecture (maybe even a new name)
20:07:44 #topic Brief progress reports / bugs needing review
20:07:58 We still have two specs up for review:
20:08:05 #link https://review.openstack.org/453005
20:08:13 l3-active-active spec
20:08:14 and
20:08:19 #link https://review.openstack.org/392485
20:08:26 flavors spec
20:08:32 Please review and comment on those
20:08:52 for the l3 active/active I can update to use the amphora lifecycle for distributor functions
20:08:59 if that is the direction we want to go
20:09:42 As for my progress report, I wrote up RBAC policy enforcement for the Octavia v2 API. It has started to merge, but there are still parts up for review.
I plan to also update the open API patches from JudeC.
20:09:45 I think we would like to do that. I also ended up promising to refactor the service VM part of the IBM patch
20:10:19 I also fixed gate issues in openstack-ansible - now need to get them to merge it
20:10:31 Yeah, I think that makes sense for the VM-based distributor, but I also think we need to make sure what is there doesn't *require* the VM if it's not necessary for the driver
20:11:00 yes, that makes sense, as an implementation might need 0 VMs
20:11:17 +1
20:11:18 Right
20:11:35 So, please, if you have time, dig into the Act/Act patches
20:12:25 The distributor VMs there are basically amphorae, so there is no point in not leveraging the lifecycle stuff we have in place for them and just extending the amp-agent API to support a new "type" of amphora
20:13:38 o/
20:13:42 (sorry to be late)
20:14:10 +1 I threw a patch over the wall: https://review.openstack.org/#/c/313006/
20:14:33 in particular https://review.openstack.org/#/c/313006/81/doc/source/api/haproxy-amphora-api.rst
20:14:39 reedip and others, interested to hear the state of the qos patch. I think it needs some cleanup but is close to being ready for review?
20:15:21 reedip is in India, so it is likely too late for him
20:15:21 +1 xgerman_
20:15:46 It would be nice to get that one in for Pike. Yeah, but sometimes he is around, so I thought I would ping
20:16:03 Ok, any other progress reports from folks?
20:16:11 Some of us have been working on gate issues too.
20:16:12 He is usually active after 10 pm my time
20:17:09 Ubuntu had bad cloud images for a bit, diskimage-builder had a bug with partitioning (fixed in master, not yet released, so we may still see it in lbaas gates), and we have been working on the 404 issue seen in the gates.
20:17:40 We have seen one case where the VIP interface never shows up under Linux (or at least not after minutes of waiting.)
20:17:49 So, smashing some bugs.
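[Editor's note: the RBAC policy enforcement work mentioned above can be illustrated with a toy sketch of role-based policy checks. The rule names and roles below are hypothetical illustrations, not the actual oslo.policy rules merged into Octavia.]

```python
# Toy illustration of role-based policy enforcement, in the spirit of
# the Octavia v2 API RBAC work discussed above. Rule names and roles
# are hypothetical, not the real Octavia policy configuration.

# Hypothetical policy table: API action -> roles allowed to perform it.
POLICIES = {
    "load-balancer:get_all": {"admin", "load-balancer_member"},
    "load-balancer:post": {"admin", "load-balancer_member"},
    "load-balancer:delete": {"admin"},
}


def enforce(action, user_roles):
    """Return True if any of the caller's roles may perform the action."""
    allowed = POLICIES.get(action, set())
    return bool(allowed & set(user_roles))


print(enforce("load-balancer:post", ["load-balancer_member"]))    # True
print(enforce("load-balancer:delete", ["load-balancer_member"]))  # False
```

In the real implementation the check happens in the API layer via oslo.policy before the request handler runs; this sketch only shows the lookup shape.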
20:18:17 Ok, moving on
20:18:37 #topic Discuss moving the meeting time
20:18:44 #link http://lists.openstack.org/pipermail/openstack-dev/2017-June/118363.html
20:19:00 This got lost in my inbox (gmail and outlook are fighting)
20:19:17 huh, yeah, same
20:19:18 #vote…
20:19:29 didn’t have that for a while ;-)
20:19:30 that's probably fine by me, actually might be easier for me to attend
20:19:37 and gets me out of my standup once a week :P
20:19:57 i guess we should officially vote on the ML, but... I'm +1
20:20:11 one less thing to split my afternoon into bits
20:20:13 I was going to do one of those online time vote things, but a vote here should work ok. I'm pretty sure it's a better time for the folks that can't regularly make this meeting
20:20:18 sec.. I'll need to convert this to my local timezone
20:20:31 k
20:20:36 Ok, let me send out an e-mail with the online vote thing.
20:20:37 3 hours earlier than now
20:20:40 nmagnezi:
20:20:43 Just to give time to look at calendars, etc.
20:20:47 well, 3 hours 20 min
20:21:54 for me it's actually much worse than our current time :<
20:22:29 nmagnezi, reply to the e-mail with a new proposal and we can hold the vote next week
20:22:42 xgerman_, aye
20:23:17 #link https://doodle.com/poll/kxvii2tn9rydp6ed
20:23:18 nmagnezi: ah, when is that for you
20:23:20 Ok, there we go
20:23:35 i would think earlier is better, as it's now your night? or does it put the meeting during your dinner time? >_>
20:23:38 rm_work, 20:00
20:24:35 johnsom: can we make that poll possible to add more times? :/
20:24:43 so people could propose times
20:24:46 i dunno if that's a thing
20:24:51 Yeah, if you don't see those options, send them to me and I will add them
20:24:57 let them propose on the ML
20:25:17 k
20:25:20 i mean *I* don't have suggestions, but others might
20:25:34 Yeah, please make sure that you find an open IRC meeting room for the timeslots proposed
20:26:08 Ok, so more discussion on the mailing list.
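[Editor's note: the timezone back-and-forth above can be sketched in a few lines. This assumes the proposal discussed is roughly three hours earlier than the current 20:00 UTC slot, i.e. about 17:00 UTC; the example timezones are illustrative picks, not a statement of where attendees actually live.]

```python
from datetime import datetime, timezone
from zoneinfo import ZoneInfo  # Python 3.9+ standard library

# Assumed proposal: move the meeting from 20:00 UTC to about three
# hours earlier, i.e. 17:00 UTC (a Wednesday in June 2017 for DST).
proposed = datetime(2017, 6, 28, 17, 0, tzinfo=timezone.utc)

# Illustrative attendee timezones -- not the actual attendee list.
for tz in ("America/Denver", "Asia/Jerusalem", "Asia/Kolkata"):
    local = proposed.astimezone(ZoneInfo(tz))
    print(tz, local.strftime("%H:%M"))
```

With June daylight-saving offsets this lands at 20:00 in Israel and 22:30 in India, which matches the "20:00" and "active after 10 pm" remarks in the log.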
20:26:23 #topic Open Discussion
20:26:28 Other topics for today?
20:27:05 should we start some priority list of patches again?
20:27:36 after all, P-3 is near and we should make it easy to focus
20:27:39 Yeah, probably. Maybe for next week.
20:28:12 o/
20:28:32 is it a good time to talk about flavors, or do we want to have that conversation on gerrit
20:28:33 yep
20:28:43 we should definitely have a priority list
20:29:02 cpuga it’s a good time
20:29:17 i find myself mentioning to multiple people which patches need to be looked at, which usually means a central place for this would be good
20:29:19 can we put a priority list etherpad in our channel topic?
20:29:41 Yeah, I can do that
20:29:52 +10
20:30:57 sorry, got distracted, umm, was there anything in particular that we don't feel comfortable with in the design
20:31:17 it was mentioned that it perhaps could be simplified
20:31:38 yes, I didn’t like all the key-value tables
20:32:03 Hmm, well, I will have to work on that. I should be anointed, but it's giving me issues.
20:32:14 which part in particular, the capability list?
20:32:30 yep, and the values we add to it
20:32:53 but I am no authority, I just don’t like to do 100s of curl posts
20:33:59 the flavor profile can be created with one post
20:34:30 but then I need to set up those key-values for the “flags”
20:35:20 rm_work, since you actually have “real” users, would it be helpful for them to query the capabilities?
20:35:47 so the idea currently is that upon the installation of the provider driver, the operator would add the provider's supported key/value pairs
20:36:26 a provider might not express capabilities with key/values. They might express them with a key and json data
20:36:33 or just json data
20:36:45 xgerman_ would you be okay if I dropped those two tables and just kept the key/value pairs on the flavor_metadata table?
20:37:50 sure, if I can set them all with one json post ;-)
20:39:07 Yeah, I like the json blob idea better myself.
I don't think we want operators in the database.
20:39:21 It's likely to lead to type issues, etc.
20:39:47 the operator wouldn't directly be in the database
20:39:55 it would be via rest calls
20:43:18 I'm flexible with the design; I'd really just like to get a feel for the team's preference so that it can move forward.
20:44:43 I need to take another pass over it and comment on the patch.
20:45:12 johnsom: would it be fine to request a webex, after you guys take a look at the spec?
20:45:38 Yeah, as long as it is open to the whole community
20:45:47 +1
20:45:49 I.e. the octavia team
20:45:57 yes, definitely
20:46:31 I think it would make sense to add a version field to provider and flavor profile
20:46:40 Yeah, I recommend sending something out to the dev mailing list with the [octavia] tag in the subject
20:47:30 +1 on that, and maybe we can include active/active L3 as well
20:48:14 or I can send a separate one for that
20:48:32 Oye, we might want two... Either one of those could be a big topic
20:48:42 +1
20:49:41 Ok, sounds like a plan
20:50:00 k, thx
20:50:29 Other topics for today?
20:51:07 yes
20:51:16 please have a look at https://bugs.launchpad.net/octavia/+bug/1698654
20:51:17 Launchpad bug 1698654 in octavia "API2.0: Loadbalancer create won't accept provider" [Undecided,New]
20:51:25 saw this while I tested a patch by rm_work
20:51:36 that's all :)
20:51:47 yeah
20:51:52 there's ... not really a provider
20:51:53 Ah, yeah, we haven't implemented that yet....
20:52:04 The provider stuff is just stubbed out afaik.
20:52:09 Correct
20:52:14 so.. maybe we should update the api-ref for now
20:52:44 When I wrote that I expected it to be there, but we lost a bunch of people, so things have changed
20:52:58 understandable
20:53:04 flavor is on that list too, which is what we were just talking about.
20:53:10 i saw it in the loadbalancer create example
20:53:31 dunno if it shows in additional examples..
I imagine it is
20:53:51 Technically it is part of the API, it is just a bug that it is not implemented.
20:54:11 yeah, but I think flavors are not causing any errors for now, so we're good on that
20:54:15 yeah, we probably should get to it when we actually have providers, IMHO
20:54:17 #link https://bugs.launchpad.net/octavia/+bug/1655768
20:54:18 Launchpad bug 1655768 in octavia "Need to enable "provider" support to the octavia v2 api" [High,New]
20:54:29 so maybe a temporary comment in the api-ref
20:55:21 We can probably make it respond better via the API
20:55:33 I will look at that
20:56:53 nmagnezi, I'm going to mark your bug as a duplicate of the one we already had open for it.
20:57:00 np
20:57:23 Ugh, note to self, I need to go clean out bugs too
20:57:41 We have closed some of these but didn't tag them
20:58:02 Ok, other topics?
20:58:08 nothing on my end
20:59:03 Ok, with two minutes left, thanks folks!
20:59:16 #endmeeting
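[Editor's note: the flavor design debate above (per-key rows requiring "100s of curl posts" versus a single JSON blob per flavor profile) can be sketched as follows. The field names and the `flavor_data` key are illustrative assumptions, not the final spec.]

```python
import json

# Hypothetical capability set for a provider driver's flavor profile;
# the keys and values here are illustrative, not from the actual spec.
capabilities = {
    "topology": "ACTIVE_STANDBY",
    "compute_flavor": "m1.amphora",
    "enable_l7": True,
}

# key/value-table style: one (flavor_profile_id, key, value) row per
# capability, i.e. one POST per entry -- the "100s of curl posts" case.
rows = [("fp-1", key, str(value)) for key, value in capabilities.items()]

# json-blob style: the whole profile is set in a single POST body.
blob = json.dumps({"name": "gold", "provider": "amphora",
                   "flavor_data": capabilities})

print(len(rows), "separate rows vs one blob of", len(blob), "bytes")
```

The blob variant also sidesteps the type issues raised at 20:39:21, since values keep their JSON types instead of being coerced to strings in a value column.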