20:00:03 #startmeeting Octavia
20:00:07 Meeting started Wed Feb 8 20:00:03 2017 UTC and is due to finish in 60 minutes. The chair is johnsom-alt. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 The meeting name has been set to 'octavia'
20:00:15 Hi folks.
20:01:30 #topic Announcements
20:01:55 You are stuck with me for another release cycle
20:02:05 #link http://lists.openstack.org/pipermail/openstack-dev/2017-February/111769.html
20:02:20 o/
20:02:35 PTL elections are complete
20:02:53 See the link above for the list of the new (or old and new again) PTLs
20:03:22 Also, don't forget we have a PTG etherpad for topics to cover at the PTG
20:03:30 #link https://etherpad.openstack.org/p/octavia-ptg-pike
20:04:09 Any other announcements today?
20:04:52 #topic Brief progress reports / bugs needing review
20:05:35 Ongoing work on the Octavia v2 API
20:05:42 Same with Active/Active
20:05:50 #link https://etherpad.openstack.org/p/Active-_Active_Topology_commits
20:06:04 That is the review etherpad for Active/Active.
20:06:24 rm_work and I have been working on py3x cleanup.
20:06:36 o/
20:06:46 We merged the first part of that work, which gets our py3x gates going.
20:07:01 * rm_work watches johnsom-alt have a meeting by himself
20:07:24 I have a patch ready to enable py3x functional tests as well, so you will see that soon
20:07:51 which SHOULD pass! :)
20:07:52 There are a few people here... (I hope)
20:08:01 o/ :)
20:08:11 ha
20:08:43 (irccloud is having an oops day, so I'm using an alternate client/nick)
20:08:53 o/
20:09:00 sorry to be late
20:09:21 Any other progress reports?
20:10:07 #topic Discuss VIP on the same network as lb-mgmt-net
20:10:12 i have a question about the api patches in general. if it does not fit here i'll wait
20:10:20 #link https://bugs.launchpad.net/octavia/+bug/1659488
20:10:20 Launchpad bug 1659488 in octavia "Octavia is not handling VIPs on the same subnet as the lb-mgmt-net" [Medium,In progress] - Assigned to zhaobo (zhaobo6)
20:10:24 okay I guess I'll wait :)
20:10:41 nmagnezi Let's talk about that in open discussion
20:10:53 johnsom-alt, np
20:11:38 Currently we have an issue if the user specifies the lb-mgmt-subnet as the VIP subnet.
20:11:59 I see two paths forward to make this a better user experience
20:12:44 currently we allow it, but we drop connectivity to the amphora-agent because the VIP process reconfigures the port/security groups
20:13:16 I think we can either block the user from using the lb-mgmt-subnet for a VIP
20:13:47 or we work on enabling the lb-mgmt-subnet to work inside the network namespace.
20:14:15 Do you folks have any comments/thoughts around this?
20:14:18 is there a use case for having a VIP on the mgmt net?
20:14:30 BTW, there is already a patch proposed to just block it.
20:15:15 The last user I saw doing this was trying to set up a flat network PoC
20:15:28 ok, so no
20:15:43 If it is blocked, couldn't they just create a new network that could reach the mgmt net anyway?
20:16:03 Ah
20:16:06 I AM doing that
20:16:10 and i have local patches to make it work already
20:16:20 Yes, the way folks normally set this up is with a dedicated lb-mgmt-subnet in neutron
20:16:29 jniesz: not always possible
20:17:15 rm_work Can you comment on that bug and the linked patch?
20:17:29 yes, my fix was VERY simple
20:17:38 possibly too simple? but it works... or maybe it's just right :P
20:17:41 rm_work How did you resolve it?
20:18:10 looking for the patched file now
20:18:29 picking it out of my other patches
20:18:51 Have you posted it or is it just local?
20:19:22 just local
20:19:28 Ok
20:19:37 i'll comment with it later
20:19:44 but I would VERY much like to see it upstream
20:19:45 Ok, thanks
20:20:00 so please let's not merge a patch that specifically blocks it :)
20:20:09 I was leaning toward "make it work" so I'm happy to hear you have a solution
20:20:37 maybe
20:20:55 Ok, happy to hear you are motivated to "make it work" Grin
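[Note: for readers following the discussion above, here is a minimal sketch of what the "just block it" option could look like at the API layer. Everything in it (function names, the exception type, the hook point) is hypothetical and is not taken from the proposed patch.]

    # Hypothetical sketch only, not the actual proposed patch. It shows
    # the shape of the "block the lb-mgmt-subnet as a VIP subnet" option
    # discussed in the meeting.

    class VIPValidationError(Exception):
        """Raised when a requested VIP placement is not allowed."""


    def validate_vip_subnet(requested_subnet_id, lb_mgmt_subnet_id):
        """Reject a VIP placed on the amphora management subnet.

        Plugging a VIP there reconfigures the port/security groups and
        cuts off controller-to-amphora-agent traffic, so the request is
        refused up front rather than failing later.
        """
        if requested_subnet_id == lb_mgmt_subnet_id:
            raise VIPValidationError(
                "VIP subnet %s is the load balancer management subnet; "
                "choose a different subnet for the VIP."
                % requested_subnet_id)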
20:21:11 #topic I18n liaison request
20:21:33 Our friends on the I18n team have been working to localize our dashboard
20:21:40 yeah!
20:21:45 We have a number of languages supported now, so good stuff.
20:22:13 They are asking if we can have a liaison join their meetings and help coordinate the effort.
20:22:30 Is anyone interested in playing that role?
20:22:38 #link https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
20:22:50 That link has the expectations
20:23:05 FYI, I am still acting as our oslo liaison
20:23:51 Crickets
20:24:00 Ha, I expected that.
20:24:18 I will add that to my list of meetings to attend
20:24:33 #topic Open Discussion
20:24:47 Did I miss my chance to step forward?
20:25:09 nmagnezi You had a question
20:25:14 aye
20:25:21 so.. I actually already asked rm_work about this, but just wanted to hear more opinions (and also ask a follow-up question). what is the best way to cherry-pick and use the API patches? say I want to actually see this code in action, how do I trigger it? we don't have a python client to work directly against Octavia, obviously.
20:25:27 xgerman Were you volunteering for the I18n liaison?
20:25:38 The follow-up question is, in what order to cherry-pick those patches?
20:25:45 sure, I can do that
20:26:18 #agreed xgerman will be our I18n liaison
20:26:22 nmagnezi I am pretty close with the proxy… I can accelerate that
20:26:23 Thank you xgerman
20:26:30 are the api patches all in a chain? I forgot to check
20:26:31 if not, I would recommend we fix it so they are
20:26:53 +1
20:26:55 rm_work, yup that would probably make it easier to review.
20:26:57 Yes, they are in a patch chain
20:26:59 ok
20:27:07 then you should be able to just grab the tail
20:27:10 nmagnezi, Right now the patches are in a dependent chain to help solve functional tests. To use them, checking out the last one will probably pick up the others as well.
20:27:13 *checkout*, not cherry-pick
20:27:14 So, you can check out the last patch in the chain that you are interested in
20:27:23 xgerman, can you please link the patch so I'll keep track?
20:27:40 rm_work, checkout, correct. sorry.
20:27:55 I set up a devstack, usually with master
20:28:13 nmagnezi, This is the tail: https://review.openstack.org/#/c/406328/
20:28:15 https://review.openstack.org/#/c/418530/
20:28:16 Then, go into /opt/stack/octavia and check out the last patch in the chain I am interested in.
20:28:43 Then I do a "python (sometimes python3) setup.py install" as root
20:29:01 Then, I use screen -r to restart the o-* services impacted.
20:29:26 johnsom-alt, aye, but how can I send the API call directly to octavia?
20:29:31 I then use curl to do testing
20:29:40 #link https://gist.github.com/sbalukoff/e6cd600b4a12ee582f5e
20:29:43 nice
20:29:43 no creds or anything?
20:29:48 That is a list of examples for the old v1 API
20:30:00 that's for octavia
20:30:07 You will need to adjust them for the new v2 API
20:30:28 No creds are needed if you have "noauth" in your octavia.conf
20:30:38 for octavia I use: https://gist.github.com/rm-you/e1c2bf33aa570e310b1cdc7ebdd5dc2e
20:30:44 modified to use auth
20:31:09 thank you for the examples guys, they look really useful
20:31:12 :)
20:31:48 You can also use the postman extension for chrome and build up a library of REST calls.
20:32:05 As an alternative to using curl
20:32:05 but yeah, those aren't for neutron-lbaas, I'd need to look at modifying them for that
20:32:55 #link https://www.getpostman.com/
20:33:02 For those interested in the postman option
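[Note: as a rough illustration of the curl-style smoke testing described above, here is a hypothetical Python equivalent, assuming a devstack host with "noauth" set in octavia.conf. The endpoint path and payload fields are assumptions based on the in-progress v2 API patches, not a confirmed contract; the linked gists show the real calls.]

    # Hypothetical Python equivalent of the curl tests discussed above.
    # Assumes "noauth" in octavia.conf, so no keystone token is needed.
    # The /v2.0/lbaas/loadbalancers path and the payload shape are
    # assumptions based on the in-progress v2 API patches.
    import json

    import requests

    # Default o-api port on a devstack host.
    OCTAVIA_ENDPOINT = "http://localhost:9876"


    def create_load_balancer(vip_subnet_id, project_id):
        body = {"loadbalancer": {"name": "test-lb",
                                 "vip_subnet_id": vip_subnet_id,
                                 "project_id": project_id}}
        resp = requests.post(OCTAVIA_ENDPOINT + "/v2.0/lbaas/loadbalancers",
                             data=json.dumps(body),
                             headers={"Content-Type": "application/json"})
        resp.raise_for_status()
        return resp.json()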
20:33:02 oh, off that topic: I'm working on a networking driver to allow for using FLIPs internally instead of AAP to handle failovers, for the case where you can't guarantee all amps will be on the same L2 net -- if this is of interest to anyone, let me know
20:33:03 I think curl is fine, the examples you pasted can get me started with this
20:33:45 nmagnezi Great
20:34:05 jniesz I think what rm_work is talking about might interest you as well
20:35:23 for our new networking design, we are looking to do a pure L3 design
20:36:39 so depending on which rack an amphora lands in, it would be a different L2 segment
20:36:40 I'm interested in how you're aiming to accomplish that
20:36:58 We have everything in L3s as well
20:37:00 +1
20:37:39 Basically the same issue
20:37:42 was thinking of using anycast with ECMP, and then the distributor would be something like Quagga that updates bgp routes
20:37:46 for the amphoras
20:37:54 OK, so you're using a distributor
20:38:33 jniesz So you are looking at the Active/Active topology more than the others?
20:39:01 the distributor would not be in the data plane
20:39:08 just used to update routes to our underlay
20:39:26 yes, looking at modeling this and getting a PoC
20:39:41 were you planning on putting it upstream?
20:40:07 yes, definitely something we would want to get upstreamed
20:40:09 this seems like a similar problem to the one we are solving, so it might be possible to collaborate
20:40:16 right now we are in the very early stages of modeling it out
20:40:30 ok
20:40:41 but definitely would like to work with you on the design
20:40:48 it seems to solve a lot of issues
20:40:59 plus it supports IPv6
20:41:05 Cool. If we can help, please let us know. We can also have brainstorming meetings and/or etherpads to collaborate.
20:41:32 +1
20:41:34 OK. If you have any preliminary documents about the design I would love to take a look with our networking guys and see if this approach would work for us as well
20:42:33 I can put together some docs for a follow-up discussion
20:42:46 awesome
20:43:00 +1
20:44:21 Side topic, I forgot to mention I am also working on a patch for how we handle project_id through the API. It should fix some issues out there with delete and the quota patch.
20:44:44 #link https://bugs.launchpad.net/octavia/+bug/1624145
20:44:44 Launchpad bug 1624145 in octavia "Octavia should ignore project_id on API create commands (except load_balancer)" [High,New] - Assigned to Michael Johnson (johnsom)
20:44:59 Any other questions/topics?
20:46:12 Ok, thanks for joining today and all of your great work on Octavia!
20:46:25 thanks
20:46:29 o/
20:46:34 #endmeeting
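[Note: a postscript for readers curious about the anycast/ECMP distributor idea discussed above. The sketch below is purely hypothetical and not from any Octavia patch: a control-plane-only distributor that adds or removes per-amphora static routes for the anycast VIP in a local Quagga instance, which then advertises them to the underlay over BGP (e.g., with "redistribute static" configured in bgpd), letting ECMP spread traffic across the amphora next hops.]

    # Purely hypothetical sketch of a control-plane-only distributor,
    # per the design discussion above. It never touches the data plane;
    # it only installs or withdraws static routes for the anycast VIP,
    # one per healthy amphora, via the local Quagga vtysh shell. With
    # bgpd set to "redistribute static", the underlay learns the routes
    # and ECMP balances traffic across the amphora next hops.
    import subprocess


    def _vtysh(*commands):
        # vtysh accepts repeated -c flags, executed in order.
        args = ["vtysh"]
        for command in ("configure terminal",) + commands:
            args.extend(["-c", command])
        subprocess.check_call(args)


    def add_amphora(vip_address, amphora_ip):
        """Start sending a share of VIP traffic to this amphora."""
        _vtysh("ip route %s/32 %s" % (vip_address, amphora_ip))


    def remove_amphora(vip_address, amphora_ip):
        """Stop routing VIP traffic to this amphora (e.g., it failed health checks)."""
        _vtysh("no ip route %s/32 %s" % (vip_address, amphora_ip))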