20:00:03 <johnsom-alt> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Feb  8 20:00:03 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom-alt. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 <openstack> The meeting name has been set to 'octavia'
20:00:15 <johnsom-alt> Hi folks.
20:01:30 <johnsom-alt> #topic Announcements
20:01:55 <johnsom-alt> You are stuck with me for another release cycle
20:02:05 <johnsom-alt> #link http://lists.openstack.org/pipermail/openstack-dev/2017-February/111769.html
20:02:20 <xgerman> o/
20:02:35 <johnsom-alt> PTL elections are complete
20:02:53 <johnsom-alt> See the link above for the list of the new (or old and new again) PTLs
20:03:22 <johnsom-alt> Also don't forget we have a PTG etherpad for topics to cover at the PTG
20:03:30 <johnsom-alt> #link https://etherpad.openstack.org/p/octavia-ptg-pike
20:04:09 <johnsom-alt> Any other announcements today?
20:04:52 <johnsom-alt> #topic Brief progress reports / bugs needing review
20:05:35 <johnsom-alt> Ongoing work on the Octavia v2 API
20:05:42 <johnsom-alt> Same with Active/Active
20:05:50 <johnsom-alt> #link https://etherpad.openstack.org/p/Active-_Active_Topology_commits
20:06:04 <johnsom-alt> That is the review etherpad for Active/Active.
20:06:24 <johnsom-alt> rm_work and I have been working on py3x clean up.
20:06:36 <rm_work> o/
20:06:46 <johnsom-alt> We merged the first part of that work which is getting our py3x gates going.
20:07:01 * rm_work watches johnsom-alt have a meeting by himself
20:07:24 <johnsom-alt> I have a patch ready to enable py3x functional tests as well, so you will see that soon
20:07:51 <rm_work> which SHOULD pass! :)
20:07:52 <johnsom-alt> There are a few people here... (I hope)
20:08:01 <sshank> o/ :)
20:08:11 <xgerman> ha
20:08:43 <johnsom-alt> (irccloud is having an oops day, so I'm using an alternate client/nick)
20:08:53 <nmagnezi> o/
20:09:00 <nmagnezi> sorry to be late
20:09:21 <johnsom-alt> Any other progress reports?
20:10:07 <johnsom-alt> #topic Discuss VIP on the same network as lb-mgmt-net
20:10:12 <nmagnezi> i have a question about the API patches in general. if it does not fit here i'll wait
20:10:20 <johnsom-alt> #link https://bugs.launchpad.net/octavia/+bug/1659488
20:10:20 <openstack> Launchpad bug 1659488 in octavia "Octavia is not handling VIPs on the same subnet as the lb-mgmt-net" [Medium,In progress] - Assigned to zhaobo (zhaobo6)
20:10:24 <nmagnezi> okay I guess I'll wait :)
20:10:41 <johnsom-alt> nmagnezi Let's talk about that in open discussion
20:10:53 <nmagnezi> johnsom-alt, np
20:11:38 <johnsom-alt> Currently we have an issue if the user specifies the lb-mgmt-subnet as the VIP subnet.
20:11:59 <johnsom-alt> I see two paths forward to make this a better user experience
20:12:44 <johnsom-alt> currently we allow it, but we drop connectivity to the amphora-agent because the VIP process reconfigures the port/security groups
20:13:16 <johnsom-alt> I think we can either block the user from using the lb-mgmt-subnet for a VIP
20:13:47 <johnsom-alt> or we work on enabling the lb-mgmt-subnet to work inside the network namespace.
20:14:15 <johnsom-alt> Do you folks have any comments/thoughts around this?
20:14:18 <xgerman> is there a use case for having VIP on mgmt net?
20:14:30 <johnsom-alt> BTW, there is already a patch proposed to just block it.
20:15:15 <johnsom-alt> The last user I saw doing this was trying to setup a flat network PoC
20:15:28 <xgerman> ok, so no
20:15:43 <jniesz> If it is blocked, couldn't they just create a new network that could reach the mgmt net anyway?
20:16:03 <rm_work> Ah
20:16:06 <rm_work> I AM doing that
20:16:10 <rm_work> and i have local patches to make it work already
20:16:20 <johnsom-alt> Yes, the way folks normally set this up is with a dedicated lb-mgmt-subnet in neutron
20:16:29 <rm_work> jniesz: not always possible
20:17:15 <johnsom-alt> rm_work Can you comment on that bug and the linked patch?
20:17:29 <rm_work> yes, my fix was VERY simple
20:17:38 <rm_work> possibly too simple? but it works... or maybe it's just right :P
20:17:41 <johnsom-alt> rm_work How did you resolve it?
20:18:10 <rm_work> looking for the patched file now
20:18:29 <rm_work> picking it out of my other patches
20:18:51 <johnsom-alt> Have you posted it or is it just local?
20:19:22 <rm_work> just local
20:19:28 <johnsom-alt> Ok
20:19:37 <rm_work> i'll comment with it later
20:19:44 <rm_work> but, I would VERY VERY much like to see it upstream
20:19:45 <johnsom-alt> Ok, thanks
20:20:00 <rm_work> so please let's not merge a patch that specifically blocks it :)
20:20:09 <johnsom-alt> I was leaning toward "make it work right" so happy to hear you have a solution
20:20:29 <rm_work> s/ right//
20:20:37 <rm_work> maybe
20:20:55 <johnsom-alt> Ok, happy to hear you are motivated to "make it work"  Grin
20:21:11 <johnsom-alt> #topic I18n liaison request
20:21:33 <johnsom-alt> Our friends on the I18n team have been working to localize our dashboard
20:21:40 <xgerman> yeah!
20:21:45 <johnsom-alt> We have a number of languages supported now, so good stuff.
20:22:13 <johnsom-alt> They are inquiring if we can have a liaison join their meetings and help coordinate the effort.
20:22:30 <johnsom-alt> Is anyone interested in playing that role?
20:22:38 <johnsom-alt> #link https://wiki.openstack.org/wiki/CrossProjectLiaisons#I18n
20:22:50 <johnsom-alt> That link has the expectations
20:23:05 <johnsom-alt> FYI, I am still acting as our oslo liaison
20:23:51 <johnsom-alt> Crickets
20:24:00 <johnsom-alt> Ha, I expected that.
20:24:18 <johnsom-alt> I will add that to my list of meetings to attend
20:24:33 <johnsom-alt> #topic Open Discussion
20:24:47 <xgerman> Did I miss the chance to step forward?
20:25:09 <johnsom-alt> nmagnezi You had a question
20:25:14 <nmagnezi> aye
20:25:21 <nmagnezi> so.. I actually already asked rm_work about this, but just wanted to hear more opinions (and also ask a follow-up question). what is the best way to cherry-pick and use the API patches? say I want to actually see this code in action, how to trigger it? we don't have a python client to work directly against Octavia obviously.
20:25:27 <johnsom-alt> xgerman Were you volunteering for the I18n liaison?
20:25:38 <nmagnezi> The follow-up question is, in what order to cherry-pick those patches?
20:25:45 <xgerman> sure, I can do that
20:26:18 <johnsom-alt> #agreed xgerman will be our I18n liaison
20:26:22 <xgerman> nmagnezi I am pretty close with the proxy… I can accelerate that
20:26:23 <johnsom-alt> Thank you xgerman
20:26:30 <rm_work> are the api patches all in a chain? I forgot to check
20:26:31 <rm_work> if not, I would recommend we fix it so they are
20:26:53 <xgerman> +1
20:26:55 <nmagnezi> rm_work, yup that would probably make it easier to review.
20:26:57 <johnsom-alt> Yes, they are in a patch chain
20:26:59 <rm_work> ok
20:27:07 <rm_work> then you should be able to just grab the tail
20:27:10 <sshank> nmagnezi, Right now the patches are in a dependent chain to help solve functional tests. To use them, checking out the last one will pick up the others as well.
20:27:13 <rm_work> *checkout*, not cherry-pick
20:27:14 <johnsom-alt> So, you can checkout the last patch in the chain that you are interested in
20:27:23 <nmagnezi> xgerman, can you please link the patch so I'll keep track ?
20:27:40 <nmagnezi> rm_work, checkout, correct. sorry.
20:27:55 <johnsom-alt> I setup a devstack, usually with master
20:28:13 <sshank> nmagnezi, This is the tail: https://review.openstack.org/#/c/406328/
20:28:15 <xgerman> https://review.openstack.org/#/c/418530/
20:28:16 <johnsom-alt> Then, go into /opt/stack/octavia and checkout the last patch in the chain I am interested in.
20:28:43 <johnsom-alt> Then I do a "python setup.py install" (sometimes python3) as root
20:29:01 <johnsom-alt> Then, I use screen -r to restart the o-* services impacted.
20:29:26 <nmagnezi> johnsom-alt, aye, but how can I send the api call directly to octavia?
20:29:31 <johnsom-alt> I then use curl to do testing
20:29:40 <johnsom-alt> #link https://gist.github.com/sbalukoff/e6cd600b4a12ee582f5e
20:29:43 <rm_work> nice
20:29:43 <nmagnezi> no creds or anything?
20:29:48 <johnsom-alt> That is a list of examples for the old v1 API
20:30:00 <rm_work> that's for octavia
20:30:07 <johnsom-alt> You will need to adjust them for the new v2 API
20:30:28 <johnsom-alt> No creds are needed if you have "noauth" in your octavia.conf
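[Editor's note: a minimal sketch of the noauth testing flow discussed above. The port, the v2 endpoint path, and the request field names are illustrative assumptions, not details confirmed in the meeting — adjust them to match your deployment and the linked gists.]

```python
import json

# Assumed Octavia API endpoint: 9876 is the usual default API port, and
# the v2 path shown here is illustrative (check your deployment).
BASE_URL = "http://localhost:9876/v2.0/lbaas/loadbalancers"

def build_create_lb_body(name, vip_subnet_id, project_id):
    """Build the JSON body for a hypothetical load balancer create call."""
    return json.dumps({
        "loadbalancer": {
            "name": name,
            "vip_subnet_id": vip_subnet_id,
            "project_id": project_id,
        }
    })

body = build_create_lb_body("lb1", "<vip-subnet-uuid>", "<project-uuid>")
# With "noauth" set in octavia.conf, no token header is required, so the
# equivalent curl call is simply:
#   curl -X POST -H "Content-Type: application/json" \
#        -d "$BODY" http://localhost:9876/v2.0/lbaas/loadbalancers
print(json.loads(body)["loadbalancer"]["name"])
```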
20:30:38 <rm_work> for octavia I use: https://gist.github.com/rm-you/e1c2bf33aa570e310b1cdc7ebdd5dc2e
20:30:44 <rm_work> modified to use auth
20:31:09 <nmagnezi> thank you for the examples guys, they look really useful
20:31:12 <nmagnezi> :)
20:31:48 <johnsom-alt> You can also use the postman extension for chrome and build up a library of REST calls.
20:32:05 <johnsom-alt> As an alternative to using curl
20:32:05 <rm_work> but yeah, those aren't for neutron-lbaas, I'd need to look at modifying them for that
20:32:55 <johnsom-alt> #link https://www.getpostman.com/
20:33:02 <johnsom-alt> For those interested in the postman option
20:33:02 <rm_work> oh, off that topic: I'm working on a networking driver to allow for using FLIPs internally instead of AAP to handle failovers, for the case where you can't guarantee all amps will be on the same L2 net -- if this is of interest to anyone, let me know
20:33:03 <nmagnezi> I think curl is fine, the examples you pasted can get me started with this
20:33:45 <johnsom-alt> nmagnezi Great
20:34:05 <johnsom-alt> jniesz I think what rm_work is talking about might interest you as well
20:35:23 <jniesz> on our new networking design, we are looking to do a pure L3 design
20:36:39 <jniesz> so depending on what rack an amphora lands in, it would be a different L2 segment
20:36:40 <rm_work> I'm interested in how you're aiming to accomplish that
20:36:58 <rm_work> We have everything in L3s as well
20:37:00 <nmagnezi> +1
20:37:39 <rm_work> Basically the same issue
20:37:42 <jniesz> was thinking of using anycast with ECMP, and then the distributor would be something like Quagga that updates bgp routes
20:37:46 <jniesz> for the amphoras
20:37:54 <rm_work> OK, so you're using a distributor
20:38:33 <johnsom-alt> jniesz So you are looking at the Active/Active topology more than the others?
20:39:01 <jniesz> the distributor would not be in data plane
20:39:08 <jniesz> just used to update routes to our underlay
20:39:26 <jniesz> yes, looking on modeling this and getting a PoC
20:39:41 <rm_work> were you planning on putting it upstream?
20:40:07 <jniesz> yes, definitely something we would want to get upstreamed
20:40:09 <rm_work> this seems like a similar problem that we are solving, so it might be possible to collaborate
20:40:16 <jniesz> right now we are in the very early stages of modeling it out
20:40:30 <rm_work> ok
20:40:41 <jniesz> but definitely would like to work with you on the design
20:40:48 <jniesz> it seems to solve a lot of issues
20:40:59 <jniesz> plus it supports IPv6
20:41:05 <johnsom-alt> Cool.  If we can help, please let us know.  We can also have brainstorming meetings and/or etherpads to collaborate with
20:41:32 <xgerman> +1
20:41:34 <rm_work> OK. If you have any preliminary documents about the design I would love to take a look with our networking guys and see if this approach would work for us as well
20:42:33 <jniesz> I can put together some docs, for a follow up discussion
20:42:46 <rm_work> awesome
20:43:00 <johnsom-alt> +1
20:44:21 <johnsom-alt> Side topic, I forgot to mention I am also working on a patch for how we handle project_id through the API.  It should fix some issues out there with delete and the quota patch.
20:44:44 <johnsom-alt> #link https://bugs.launchpad.net/octavia/+bug/1624145
20:44:44 <openstack> Launchpad bug 1624145 in octavia "Octavia should ignore project_id on API create commands (except load_balancer)" [High,New] - Assigned to Michael Johnson (johnsom)
20:44:59 <johnsom-alt> Any other questions/topics?
20:46:12 <johnsom-alt> Ok, thanks for joining today and all of your great work on Octavia!
20:46:25 <jniesz> thanks
20:46:29 <rm_work> o/
20:46:34 <johnsom-alt> #endmeeting