20:00:29 <johnsom> #startmeeting Octavia
20:00:31 <openstack> Meeting started Wed Dec 20 20:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:34 <openstack> The meeting name has been set to 'octavia'
20:00:35 <rm_mobile> o/
20:00:40 <johnsom> Try that again without the typo....
20:00:40 <xgerman_> o/
20:01:00 <johnsom> Hi folks
20:01:00 <longstaff> hi
20:01:16 <johnsom> #topic Announcements
20:01:28 <jniesz> hi
20:01:46 <johnsom> I plan to cancel the weekly IRC meeting next week.  We will resume 1/3/18.
20:01:55 <xgerman_> +1
20:01:59 <johnsom> Many folks are taking some time off at the end of the year.
20:02:08 <rm_mobile> Yep
20:02:16 <johnsom> I will send out an e-mail after the meeting.
20:02:41 <johnsom> Also news, freenode (the IRC host for OpenStack) had a spam issue over the weekend
20:02:52 <rm_mobile> Lol yes
20:02:54 <rm_mobile> Such spam
20:02:59 <johnsom> There were offensive comments posted to rooms and they were direct messaging folks.
20:03:33 <johnsom> Because of that you now need to be registered with freenode and logged in to post in some channels and to direct message folks.
20:03:58 <johnsom> I know some folks didn't get the notification of the change and were having trouble with IRC.
20:04:06 <nmagnezi> o/
20:04:18 <johnsom> Let me know if you have folks having trouble and I can help get them set up on freenode.
20:04:49 <johnsom> There was a summary of the "1 year release cycle" discussion posted to the mailing list:
20:04:56 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2017-December/125688.html
20:05:20 <johnsom> At this point it seems like an ongoing discussion, but thought I would keep you posted.
20:05:51 <nmagnezi> thanks for that url.
20:05:56 <xgerman_> my feeling is it’s a done deal
20:06:19 <johnsom> Final announcement I have this week: we had a video conference hosted by Red Hat to talk about the provider drivers.  It was announced on the mailing list.  There is a short summary of topics here:
20:06:25 <johnsom> #link https://etherpad.openstack.org/p/octavia-providers-queens
20:07:03 <johnsom> xgerman_ Yeah, I don't know.  There is another 30+ message chain that has started up, so...
20:07:04 <nmagnezi> thanks to all attendees. i think we had a very good discussion.
20:07:16 <johnsom> +1
20:07:24 <johnsom> Any other announcements today?
20:07:33 <nmagnezi> it's getting late for me. sorry :)
20:07:46 <johnsom> #topic Brief progress reports / bugs needing review
20:07:59 <johnsom> I have been focusing on Active/Active patches this week.
20:08:14 <johnsom> I have a data model patch up for review and have started on the amphora driver patch.
20:08:35 <johnsom> Mostly this is a breakdown of one of the older patches that was pretty large and needed some love.
20:08:59 <johnsom> Plus many reviews in the Active/Active space.
20:09:10 <nmagnezi> #link https://review.openstack.org/#/c/529191/
20:09:15 <johnsom> I also reviewed the QoS again today.  Looks pretty good to me.
20:09:15 <nmagnezi> #link https://review.openstack.org/#/c/528850/
20:09:39 <johnsom> thanks nmagnezi
20:09:59 <johnsom> Any other progress updates?
20:10:25 <bar_> octavia client for qos is ready
20:10:26 <bar_> https://review.openstack.org/#/c/526217/
20:10:54 <johnsom> Oh, cool.  I will check out the update on that
20:11:06 <bar_> great
20:11:18 <johnsom> It was good last time I checked, though you couldn't delete the policy, which I expect is what you fixed.
20:11:28 <bar_> right
20:11:38 <johnsom> #link https://review.openstack.org/#/c/526217/
20:11:44 <johnsom> ^^^ get that in the minutes.
20:12:02 <johnsom> #topic Heat updates for Octavia
20:12:09 <johnsom> #link https://bugs.launchpad.net/heat/+bug/1737567
20:12:10 <openstack> Launchpad bug 1737567 in OpenStack Heat "Direct support for Octavia LBaaS API" [Medium,New] - Assigned to Rabi Mishra (rabi)
20:12:27 <johnsom> There is a bug open to update Heat for the new Octavia endpoint.
20:13:12 <johnsom> The author is hoping to drum up support with "affects me" votes on the bug. So if you have an interest in Heat getting updated please voice your interest on the bug.
20:14:01 <johnsom> #topic Open Discussion
20:14:27 <johnsom> I didn't have any more agenda items as I wasn't sure what the turnout was going to be.  Are there other topics we would like to continue?
20:14:34 <kpalan1> we are planning to add the octavia v2 api support in fog-openstack gem
20:14:57 <johnsom> Good question.
20:15:33 <johnsom> I don't know anyone working on that currently
20:15:38 <kpalan1> an issue has been created on their GitHub.
20:15:46 <johnsom> kpalan1 Are you able to help with that?
20:16:05 <kpalan1> yes, i will be working on it
20:16:07 <rm_work> that was the way I read it :P
20:16:12 <rm_work> "we are planning to add" :)
20:16:23 <jniesz> yes, we would like to contribute that
20:16:25 <rm_work> also: +A'd the QoS patch
20:16:30 <johnsom> Oh, oops, got distracted looking it up
20:16:31 <xgerman_> sweet
20:16:53 <johnsom> Cool, it looks like it currently only has lbaasv1 support....  Sad face
20:17:40 <johnsom> Oh, maybe not, I see it in the "requests", just not the models
20:17:49 <kpalan1> waiting for the active-active work to complete, we will be starting soon on adding octavia v2 api support there; we need it internally for one of our chef-based tools
20:17:57 <johnsom> kpalan1 Please feel free to ping us if you run into questions, etc.
20:18:12 <kpalan1> sure, thanks
20:18:14 <bar_> There are 2 issues/proposals that haven't been resolved in prior meetings: (1) the independent member API (no pool id)
20:18:35 <bar_> (2) bind amphora agent patch
20:19:16 <rm_work> 1) I think we just need to vote on whether we think it will ever be useful to have shared member objects
20:19:17 <nmagnezi> rm_work, ^^ started to sleep in normal hours.. so we did not discuss this again :<
20:19:29 <johnsom> Right.  Since we do have a good group here today, let's start at the top
20:19:42 <rm_work> nmagnezi: i told people that going to a normal schedule would not be a *good thing* for work :/
20:19:55 <rm_work> but no one listens
20:20:04 <johnsom> Hahaha
20:20:35 <johnsom> So, independent members...
20:20:48 <johnsom> bar_ Do you want to give a quick summary again?
20:21:20 <bar_> k, currently in the octavia api we can only access members by specifying both pool_id and member_id
20:21:38 <bar_> member_id is unique, so why not ADD another API, to access by member_id alone
20:21:50 <bar_> that's the proposal
20:22:01 <rm_work> yeah I think this is full circle, right?
20:22:17 <rm_work> doing /v2.0/lbaas/member/
20:22:32 <rm_work> because ... we don't really need to do shared members ever IMO
20:22:47 <rm_work> I would agree with this idea, don't need to know a pool_id to look up a member
20:23:22 <xgerman_> so we only want to do GET and LIST?
20:23:35 <xgerman_> (read only)
20:23:38 <rm_work> hmm
20:23:47 <rm_work> I mean, I actually don't know why we couldn't do POST
20:23:48 <johnsom> I think there was a concern raised before about the relationship between pool and member today, that deleting a pool currently deletes its members too.  xgerman_ is that right?
20:23:52 <nmagnezi> xgerman_, why not update as well?
20:24:02 <rm_work> if you pass a pool_id
20:24:13 <xgerman_> yes, we cascade the pool deletion to members
20:24:26 <rm_work> listeners are a sub-object on a LB, and those aren't *under* LB
20:24:31 <xgerman_> but if we only provide read only I see that less of a concern
20:24:35 <rm_work> and they can be cascade deleted as well
20:24:57 <rm_work> I'm not sure what the cascade deletion has to do with it
20:25:06 <johnsom> I will say that we would have to maintain the current API paths, etc. for backward compatibility.  Otherwise we are talking about LBaaSv3, which I really don't want to consider right now due to all of the other work going on.
20:25:19 <rm_work> lol yes
20:25:33 <rm_work> so we're talking about just adding another resource
20:25:38 <rm_work> like, member_standalone
20:25:42 <rm_work> at /member/
20:26:02 <bar_> technically, /members/
20:26:15 <rm_work> err, yeah i always forget if our resource names are plural in the API >_>
20:26:46 <xgerman_> since we could spend our time doing other things - do we have a use case for why we need that?
20:26:56 <rm_work> it's ... easier to access? <_<
20:26:57 <rm_work> dunno
20:27:04 <johnsom> You would have to make pool_id mandatory on the member create calls
20:27:06 <rm_work> i'm just saying i'd vote to allow that
20:27:16 <rm_work> not that we should prioritize it
20:27:18 <johnsom> xgerman_ Very good question
20:27:34 <rm_work> if someone wants to spend their time doing something though, I can't stop them
20:27:48 <bar_> xgerman_, I don't see much use, if members are not to be shared.
20:27:50 <rm_work> that's the point of open source, and why companies hire us anyway -- to set priorities
20:28:09 <bar_> It would be easier to access, that's it.
20:28:15 <rm_work> yep
20:28:17 <johnsom> Agreed, but it would be another distraction from getting our major goals for the release done (act/act, drivers, flavor)
20:28:33 <xgerman_> yeah, even if we don't write it we will need to review it
20:28:39 <nmagnezi> so we can set it as "wish list" or something
20:28:45 <xgerman_> yep
20:28:59 <rm_work> i'm just saying, if i saw code pop up that does this, I'd review it and be willing to +2 if it's good
20:29:08 <nmagnezi> rm_work, +1
20:29:10 <rm_work> i think the point of the question was just "is this OK?"
20:29:48 <bar_> Approved then?
20:29:49 <johnsom> Does this reach the spec bar, or is it just an RFE?
20:29:58 <nmagnezi> i *think* we kinda all agree that read only direct access to members is okay, it's just not a prio
20:30:10 <xgerman_> RFE — did we ever figure out versioning?
20:30:37 <johnsom> xgerman_ Like API micro-versioning?
20:30:52 <xgerman_> like a client knowing that /members is available
20:31:09 <xgerman_> (without testing every new path)
20:32:14 <johnsom> xgerman_ API discovery is still up in the air last time I checked the api-wg.  We would need to change the client to support this.
20:32:42 <xgerman_> ok, so we should tread lightly on API extensions
20:32:52 <xgerman_> just my 2cts
20:32:55 <johnsom> #link https://specs.openstack.org/openstack/api-wg/guidelines/discoverability.html
20:32:59 <johnsom> Big fat TODO still
20:33:53 <rm_work> >_>
20:34:05 <rm_work> so we DO have a version bit
20:34:07 <xgerman_> yeah, I have seen too many clients relying on the user to define what’s possible — hate to see a --use-member-direct flag
20:34:25 <rm_work> but i imagine the client could try and fallback
20:34:48 <johnsom> We do have a version that would increment for this enhancement
20:34:51 <nmagnezi> rm_work, if a pool id was not provided, how should the client fall back?
20:35:00 <rm_work> nmagnezi: ah good point
20:35:12 <nmagnezi> :)
20:35:13 <rm_work> so if no pool-id is provided and the new endpoint isn't there.... <_<
20:35:22 <rm_work> then fail
20:35:26 <rm_work> I guess
20:35:29 <johnsom> Yep
20:36:02 <xgerman_> ok, so we increment our version - client checks that and acts accordingly
20:36:59 <nmagnezi> i think that API versioning is a broader topic. for example we can say similar things about the upcoming QoS support
20:37:09 <johnsom> Right.  <note: cores need to watch for API additions and make sure the minor version gets updated>
20:37:10 <bar_> +1
20:37:22 <johnsom> Yep
20:37:32 <johnsom> We have overlooked that recently
20:37:43 <xgerman_> +1
20:37:49 * johnsom slaps his own wrist
20:38:02 <nmagnezi> johnsom, now we have two incentives not to :)
20:39:10 <xgerman_> we should also add that to the API docs so someone knows which version has which API calls
20:39:27 <johnsom> I will take an action to go update the version starting with QoS.
20:39:55 <nmagnezi> xgerman_, do we have an API call to fetch the version number?
20:39:57 <xgerman_> +1 (I can see us also lumping all changes for a cycle together)
20:40:05 <johnsom> nmagnezi Yes
20:40:14 <xgerman_> GET /
20:40:57 <johnsom> #link https://developer.openstack.org/api-ref/load-balancer/#api-discovery
20:41:23 <rm_work> xgerman_: agreed, for one cycle it's probably fine to lump stuff
20:41:37 <rm_work> and for those of us on master, "missing" features is less of a problem :P
20:41:50 <xgerman_> but with one year cycles looming I would increment more often
20:42:10 <johnsom> Though I wonder if we should not have a numerical version here as well.  Will have to go back and double check the api-wg
20:42:13 <rm_work> yeah we should just buckle down and be good about doing it i guess
20:42:22 <xgerman_> +1
20:42:25 <johnsom> +1
20:42:37 <johnsom> You guys should fire your PTL
20:42:42 <johnsom> Grin
20:42:46 <rm_work> lol nope
20:42:49 <rm_work> 4 more years! :P
20:42:54 <nmagnezi> Ha
20:43:02 <nmagnezi> +1 Adam
20:43:04 <johnsom> Oye
20:43:16 <johnsom> Anyway, let's summarize here....
20:43:17 <rm_work> Yeah we could do version increments there too probably
20:43:46 <rm_work> cause just having "last updated date" is kinda weird
20:43:58 <rm_work> id=v2.0 is also weird
20:44:14 <johnsom> RFE it.  Ok to add read-only paths, remember to bump the api minor version as a start
20:44:15 <rm_work> shouldn't we have like... ['major', 'minor'] at a minimum?
20:44:18 <johnsom> right?
20:44:39 <johnsom> rm_work Yeah, I am wondering too.  I know this was a copy of neutron-lbaas, but that doesn't mean it's right....
20:44:55 <rm_work> yeah certainly read-only is easier, but i still don't see why it couldn't be a full CRUD resource
20:45:04 <xgerman_> yep, but major is v2
20:45:20 <rm_work> johnsom: i wonder what we break if we add major/minor and just point it to the ID
20:45:33 <rm_work> or maybe actually
20:45:49 <rm_work> major=2 minor=0
20:45:51 <rm_work> would be "now"
20:45:58 <xgerman_> indeed
20:45:59 <johnsom> Well, if you ask too many questions here, the answer will come down to micro-versions....
20:46:03 <rm_work> heh
20:46:14 <rm_work> k i'd probably be in favor of major/minor/micro
20:46:15 <rm_work> or something
20:46:20 <rm_work> what's the third one
20:46:31 <xgerman_> nah, we just increase minor sequentially
20:46:41 <johnsom> #link https://specs.openstack.org/openstack/api-wg/guidelines/microversion_specification.html#version-discovery
20:46:48 <rm_work> kk
20:46:50 <rm_work> so we do THAT
20:46:59 <rm_work> that works
20:47:01 <rm_work> it has id
20:47:04 <cgoncalves> +1 for microversion
20:47:07 <rm_work> but also the real stuff
20:47:11 <johnsom> So...  We are compliant today, just not supporting microversions yet
20:47:12 <rm_work> wooo standards
20:47:21 <rm_work> yep, +1 implement microversion
20:47:35 <rm_work> probably we should do that inside the cycle (it looks trivial)
20:47:36 <johnsom> id is the same as we have now
20:47:48 <xgerman_> ok, microversions it is
20:47:55 <rm_work> so we have a first microversion for QoS and whatever else
20:48:08 <johnsom> Oye, ok...  Please read the whole doc before deciding we want to jump on that.
20:48:10 <rm_work> and then we can try to be good about incrementing on API changes from now
20:48:15 <rm_work> ok will read
20:48:17 <johnsom> It can also make the client a bit of hell
20:48:31 <rm_work> should we do an official vote in January when we're all back?
20:48:47 <johnsom> Yes, let's hold off on the microversion stuffs
20:49:17 <xgerman_> can’t we just do server and ignore client?
20:49:26 <johnsom> bar_ Did you get an answer out of that on the member API?
20:49:31 <rm_work> i mean ... the POINT is for the client, isn't it?
20:49:38 <johnsom> rm_work +1
20:49:38 <bar_> hmm, why only read-only path?
20:49:47 <rm_work> yeah i'm not sure i follow read-only either
20:49:50 <xgerman_> I don’t like the ACCEPT Header stuff
20:49:52 <rm_work> i would just do it as a full thing
20:49:58 <rm_work> and we'll review
20:50:05 <rm_work> i think it'll be fine
20:50:12 <rm_work> when people see that it works
20:50:21 <rm_work> again though, some other stuff is probably higher priority
20:50:31 <rm_work> like finishing our tempest stuff (did you say you were going to look at that?)
20:50:35 <johnsom> Ok, maybe split the patches just in case someone comes up with a reason why the updates are a bad idea
20:50:38 <bar_> I am.
20:50:42 <rm_work> kk
20:50:47 <bar_> It's... neglected...
20:51:05 <johnsom> Oh yes, tempest is important.
20:51:14 <bar_> I need to re-write some patches.
20:51:20 <johnsom> It is a community goal.  I have updated our status to in-progress.
20:51:32 <bar_> can we delete octavia/tests/tempest?
20:51:48 <johnsom> Yes, it goes away with the tempest plugin patch
20:52:06 <johnsom> Though we need to time it with the overall tempest plugin switch over
20:52:28 <bar_> Is it for Queens?
20:52:49 <johnsom> We need to have a working tempest plugin for queens, yes
20:53:10 <johnsom> #link https://governance.openstack.org/tc/goals/queens/split-tempest-plugins.html
20:53:27 <bar_> ok. API proposal is approved? (though not prioritized)
20:54:01 <johnsom> Well, technically you would create the RFE story and we would approve it there, but essentially yes.
20:54:09 <bar_> I see.
20:54:17 <johnsom> After the tempest plugin is done....  GRIN
20:54:19 <johnsom> just kidding
20:54:23 <bar_> :-{
20:54:25 <bar_> :-)
20:54:33 <bar_> I'm working on it
20:54:38 <johnsom> THANK YOU
20:54:39 <bar_> bind amphora agent?
20:55:02 <johnsom> Ok, six minutes, bind amphora agent.  This was about a better way to do it right?
20:55:22 <bar_> rm_work, nmagnezi ?
20:55:41 <johnsom> Last I remember rm_work was going to comment/help with the "better" way
20:55:57 <bar_> yeah, but nmagnezi and I have reservations....
20:56:12 <rm_work> better way should just be to finally implement the amphora-api for update-config
20:56:20 <nmagnezi> basically bar's current implementation is to create the neutron port before we call "nova boot" so we'll know the IP in advance and configure amphora-agent.conf
20:56:30 <rm_work> and our initial connection to the amp can do a config update to set the right listening IP
20:56:49 <rm_work> yeah, and that's untenable for some types of networks
20:56:58 <nmagnezi> rm_work does not like that implementation and prefers an agent restart API call to update the file and reload config
20:57:10 <johnsom> nmagnezi We didn't do that because it doesn't work in some deployments if I remember
20:57:13 <nmagnezi> rm_work, what types of networks ?: )
20:57:22 <bar_> can we have different flows for different types of networks? Is it... done?
20:57:33 <rm_work> the kind where you can't choose the network that gets plugged to a new VM :)
20:57:39 <nmagnezi> johnsom, correct. rm_work's deployment does not support it for example.
20:58:01 <nmagnezi> rm_work, so how does nova know? :) just wondering..
20:58:09 <rm_work> nova figures it out internally
20:58:20 <rm_work> based on the HV that it schedules, it also schedules a network
20:58:25 <johnsom> I think originally nova-networks had an issue with it too.  Like you couldn't boot without at least one nic
20:58:54 <cgoncalves> johnsom: true
20:58:55 <nmagnezi> rm_work, is that a thing in nova? or is it an internal solution you guys have?
20:59:00 <johnsom> nova networks is dead now BTW
20:59:00 <rm_work> I know it's not just me, too -- i talked to at least one other deployer that had the same issue
20:59:24 <rm_work> in our case it is a custom scheduler in nova, yes
20:59:33 <nmagnezi> johnsom, good riddance (nova network)
20:59:46 <xgerman_> we also talked about taking it from DHCP/cloud init/?
20:59:55 <rm_work> yeah that might be possible
20:59:57 <johnsom> I will say, we still need the amp config update API.  That is still a super valid need.
21:00:10 <xgerman_> but we shouldn’t commingle the two
21:00:12 <rm_work> yes, and my point was that we should just take this opportunity to do it and use it
21:00:21 <nmagnezi> johnsom, for health manager list? (trying to recall)
21:00:25 <rm_work> yes
21:00:29 <johnsom> rm_work I know that is what was originally discussed
21:00:49 <johnsom> Ugh, meeting time is up....
21:00:54 <johnsom> #endmeeting