Wednesday, 2017-12-20

*** fnaval has joined #openstack-lbaas00:00
*** fnaval has quit IRC00:03
openstackgerrit: Bar RH proposed openstack/octavia-tempest-plugin master: Update README
openstackgerrit: Bar RH proposed openstack/octavia-tempest-plugin master: Update README
bar_: {dayou, johnsom, rm_work} would you mind revoting on: ?  00:26
*** threestrands has joined #openstack-lbaas00:26
*** threestrands has quit IRC00:26
*** threestrands has joined #openstack-lbaas00:26
*** fnaval has joined #openstack-lbaas00:31
openstackgerrit: Bar RH proposed openstack/octavia master: Fail-proof VIP deallocation task
*** fnaval has quit IRC00:34
openstackgerrit: Michael Johnson proposed openstack/octavia master: ACTIVE-ACTIVE: Amphora driver updates
*** bar_ has quit IRC00:47
*** bzhao has quit IRC01:10
*** bzhao has joined #openstack-lbaas01:12
*** bbzhao has joined #openstack-lbaas01:12
*** yamamoto has joined #openstack-lbaas01:21
*** yamamoto has quit IRC01:21
*** annp has joined #openstack-lbaas01:23
openstackgerrit: ZhaoBo proposed openstack/octavia master: Extend api to accept qos_policy_id
bzhao: rm_work, johnsom please have a look about if you are free, thanks..  01:26
openstackgerrit: ZhaoBo proposed openstack/octavia master: Support UDP load balancing
*** yamamoto has joined #openstack-lbaas01:57
*** threestrands has quit IRC03:03
*** threestrands has joined #openstack-lbaas03:04
*** threestrands has quit IRC03:04
*** threestrands has joined #openstack-lbaas03:04
*** threestrands has quit IRC03:05
*** threestrands has joined #openstack-lbaas03:06
*** threestrands has quit IRC03:06
*** threestrands has joined #openstack-lbaas03:06
*** threestrands has quit IRC03:07
*** threestrands has joined #openstack-lbaas03:07
*** krypto has joined #openstack-lbaas03:12
*** threestrands_ has joined #openstack-lbaas03:57
*** threestrands has quit IRC03:57
*** threestrands_ has quit IRC03:58
*** threestrands_ has joined #openstack-lbaas03:59
*** yamamoto_ has joined #openstack-lbaas04:08
*** yamamoto has quit IRC04:11
*** sanfern has joined #openstack-lbaas04:18
*** sanfern has quit IRC04:22
*** armax has quit IRC04:44
*** armax has joined #openstack-lbaas04:51
*** armax has quit IRC04:51
*** links has joined #openstack-lbaas04:57
*** yamamoto_ has quit IRC05:10
*** yamamoto has joined #openstack-lbaas05:32
*** AlexeyAbashkin has joined #openstack-lbaas05:37
*** AlexeyAbashkin has quit IRC05:42
*** krypto has quit IRC06:10
openstackgerrit: OpenStack Proposal Bot proposed openstack/neutron-lbaas master: Imported Translations from Zanata
*** gcheresh_ has joined #openstack-lbaas06:18
*** ianychoi has quit IRC06:44
*** yamamoto has quit IRC06:49
*** threestrands_ has quit IRC06:57
*** yamamoto has joined #openstack-lbaas07:04
*** gcheresh_ has quit IRC07:06
*** rcernin has quit IRC07:08
*** yamamoto has quit IRC07:09
openstackgerrit: Alex Stafeyev proposed openstack/octavia-tempest-plugin master: Added session persistence test.
*** yamamoto has joined #openstack-lbaas07:20
*** gcheresh_ has joined #openstack-lbaas07:20
*** yamamoto has quit IRC07:24
*** gcheresh_ has quit IRC07:25
openstackgerrit: Alex Stafeyev proposed openstack/octavia-tempest-plugin master: Added session persistence test.
*** sapd has quit IRC07:40
openstackgerrit: Alex Stafeyev proposed openstack/octavia-tempest-plugin master: Added session persistence test.
*** sapd has joined #openstack-lbaas07:41
*** gcheresh_ has joined #openstack-lbaas07:48
*** yamamoto has joined #openstack-lbaas08:04
openstackgerrit: Guoqiang Ding proposed openstack/octavia master: Fix the misspelling of "listener"
*** yamamoto has quit IRC08:09
*** AlexeyAbashkin has joined #openstack-lbaas08:19
*** Alex_Staf_ has joined #openstack-lbaas08:22
*** sapd has quit IRC08:33
*** yamamoto has joined #openstack-lbaas08:34
*** sapd has joined #openstack-lbaas08:35
*** yamamoto has quit IRC08:39
*** yamamoto has joined #openstack-lbaas08:50
*** yamamoto has quit IRC08:54
*** yamamoto has joined #openstack-lbaas08:56
*** yamamoto has quit IRC08:56
*** gcheresh_ has quit IRC09:04
*** yamamoto has joined #openstack-lbaas09:05
openstackgerrit: Bernard Cafarelli proposed openstack/neutron-lbaas master: Use generic netcat syntax in base scenario
*** bcafarel has quit IRC10:15
*** krypto has joined #openstack-lbaas10:24
openstackgerrit: Nir Magnezi proposed openstack/octavia-dashboard master: Test requirements cleanup
*** salmankhan has joined #openstack-lbaas10:26
*** gcheresh_ has joined #openstack-lbaas10:39
*** gcheresh_ has quit IRC10:40
*** bcafarel has joined #openstack-lbaas10:42
*** annp has quit IRC10:53
*** krypto has quit IRC10:56
*** krypto has joined #openstack-lbaas10:57
*** yamamoto_ has joined #openstack-lbaas11:08
*** AlexeyAbashkin has quit IRC11:08
*** gcheresh_ has joined #openstack-lbaas11:08
*** yamamoto has quit IRC11:11
*** reedip has quit IRC11:21
openstackgerrit: Nir Magnezi proposed openstack/octavia-dashboard master: Test requirements cleanup
*** reedip has joined #openstack-lbaas11:35
*** gcheresh_ has quit IRC11:43
*** gcheresh_ has joined #openstack-lbaas11:50
*** AlexeyAbashkin has joined #openstack-lbaas11:52
*** salmankhan has quit IRC12:04
*** salmankhan has joined #openstack-lbaas12:06
bcafarel: nmagnezi: zuul is happy with now :)  12:30
nmagnezi: bcafarel, looks good :-)  13:01
*** gcheresh_ has quit IRC13:03
*** dmellado has quit IRC13:05
*** openstackgerrit has quit IRC13:13
-openstackstatus- NOTICE: gerrit is being restarted due to extreme slowness13:14
*** dmellado has joined #openstack-lbaas13:15
*** dmellado has quit IRC13:19
*** dmellado has joined #openstack-lbaas13:21
*** yamamoto_ has quit IRC13:34
*** krypto has quit IRC14:01
*** krypto has joined #openstack-lbaas14:02
*** krypto has quit IRC14:02
*** krypto has joined #openstack-lbaas14:02
*** devfaz has quit IRC14:07
*** logan- has quit IRC14:08
*** logan- has joined #openstack-lbaas14:10
*** yamamoto has joined #openstack-lbaas14:10
*** yamamoto has quit IRC14:23
*** gcheresh_ has joined #openstack-lbaas14:29
*** aojea has joined #openstack-lbaas14:38
*** jniesz has joined #openstack-lbaas14:50
*** aojea has quit IRC14:52
*** gcheresh_ has quit IRC14:54
*** yamamoto has joined #openstack-lbaas15:23
*** yamamoto has quit IRC15:35
*** armax has joined #openstack-lbaas15:44
*** pcaruana has joined #openstack-lbaas15:51
*** gcheresh_ has joined #openstack-lbaas15:58
*** gcheresh_ has quit IRC16:12
*** AlexeyAbashkin has quit IRC16:37
*** links has quit IRC16:44
*** Alex_Staf_ has quit IRC16:51
*** krypto has quit IRC16:52
*** krypto has joined #openstack-lbaas16:53
*** krypto has quit IRC16:53
*** krypto has joined #openstack-lbaas16:53
*** salmankhan has quit IRC17:29
*** salmankhan has joined #openstack-lbaas17:37
*** bzhao has quit IRC17:52
*** bzhao has joined #openstack-lbaas17:53
*** sapd_ has joined #openstack-lbaas18:01
*** sapd has quit IRC18:01
*** openstackgerrit has joined #openstack-lbaas18:05
openstackgerrit: Merged openstack/neutron-lbaas master: Use generic netcat syntax in base scenario
*** AlexeyAbashkin has joined #openstack-lbaas18:08
*** AlexeyAbashkin has quit IRC18:13
*** krypto has quit IRC18:31
*** bbzhao has quit IRC18:31
*** krypto has joined #openstack-lbaas18:31
*** bbzhao has joined #openstack-lbaas18:32
*** salmankhan has quit IRC18:41
*** bar_ has joined #openstack-lbaas18:48
openstackgerrit: Bar RH proposed openstack/octavia master: Remove reliance on NeutronException message field
nmagnezi: johnsom, test ended successfully :D  19:00
*** gcheresh_ has joined #openstack-lbaas19:03
openstackgerrit: Bar RH proposed openstack/octavia-tempest-plugin master: Update README
*** pcaruana has quit IRC19:29
*** gcheresh_ has quit IRC19:39
*** gcheresh_ has joined #openstack-lbaas19:40
*** gcheresh_ has quit IRC19:44
*** pcaruana has joined #openstack-lbaas19:48
*** pcaruana has quit IRC19:54
*** rm_mobile has joined #openstack-lbaas19:56
*** longstaff has joined #openstack-lbaas19:58
johnsom: #startmeeting Octavia ]  20:00
openstack: Meeting started Wed Dec 20 20:00:16 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  20:00
*** openstack changes topic to " (Meeting topic: Octavia ])"20:00
openstack: The meeting name has been set to 'octavia__'  20:00
*** openstack changes topic to "Welcome to LBaaS / Octavia - Queens development is now open."20:00
openstack: Meeting ended Wed Dec 20 20:00:24 2017 UTC.  Information about MeetBot at . (v 0.1.4)  20:00
openstack: Minutes (text):
johnsom: #startmeeting Octavia  20:00
openstack: Meeting started Wed Dec 20 20:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at
openstack: Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  20:00
*** openstack changes topic to " (Meeting topic: Octavia)"20:00
openstack: The meeting name has been set to 'octavia'  20:00
johnsom: Try that again without the type-o....  20:00
johnsom: Hi folks  20:01
johnsom: #topic Announcements  20:01
*** openstack changes topic to "Announcements (Meeting topic: Octavia)"20:01
johnsom: I plan to cancel the weekly IRC meeting next week.  We will resume 1/3/18.  20:01
johnsom: Many folks are taking some time off at the end of the year.  20:01
johnsom: I will send out an e-mail after the meeting.  20:02
johnsom: Also news, freenode (the IRC host for OpenStack) had a spam issue over the weekend  20:02
rm_mobile: Lol yes  20:02
rm_mobile: Such spam  20:02
johnsom: There were offensive comments posted to rooms and they were direct messaging folks.  20:02
*** kpalan1 has joined #openstack-lbaas20:03
johnsom: Because of that you now need to be registered with freenode and logged in to post in some channels and to direct message folks.  20:03
johnsom: I know some folks didn't get the notification of the change and were having trouble with IRC.  20:03
johnsom: Let me know if you have folks having trouble and I can help get them set up on freenode.  20:04
johnsom: There was a summary of the "1 year release cycle" discussion posted to the mailing list:  20:04
johnsom: At this point it seems like an ongoing discussion, but thought I would keep you posted.  20:05
nmagnezi: thanks for that url.  20:05
xgerman_: my feeling it’s a done deal  20:05
johnsom: Final announcement I have this week, we had a video conference hosted by RedHat to talk about the provider drivers.  It was announced on the mailing list.  There is a short summary of topics here:  20:06
*** pcaruana has joined #openstack-lbaas20:06
johnsom: xgerman_ Yeah, I don't know.  There is another 30+ message chain that has started up, so...  20:07
nmagnezi: thanks to all attendees. i think we had a very good discussion.  20:07
johnsom: Any other announcements today?  20:07
nmagnezi: it's getting late for me. sorry :)  20:07
johnsom: #topic Brief progress reports / bugs needing review  20:07
*** openstack changes topic to "Brief progress reports / bugs needing review (Meeting topic: Octavia)"20:07
johnsom: I have been focusing on Active/Active patches this week.  20:07
johnsom: I have a data model patch up for review and have started on the amphora driver patch.  20:08
johnsom: Mostly this is a breakdown of one of the older patches that was pretty large and needed some love.  20:08
johnsom: Plus many reviews in the Active/Active space.  20:08
johnsom: I also reviewed the QoS again today.  Looks pretty good to me.  20:09
johnsom: thanks nmagnezi  20:09
johnsom: Any other progress updates?  20:09
bar_: octavia client for qos is ready  20:10
johnsom: Oh, cool.  I will check out the update on that  20:10
johnsom: It was good last time I checked though you couldn't delete the policy, which I expect is what you fixed.  20:11
johnsom: ^^^ get that in the minutes.  20:11
johnsom: #topic Heat updates for Octavia  20:12
*** openstack changes topic to "Heat updates for Octavia (Meeting topic: Octavia)"20:12
openstack: Launchpad bug 1737567 in OpenStack Heat "Direct support for Octavia LBaaS API" [Medium,New] - Assigned to Rabi Mishra (rabi)  20:12
johnsom: There is a bug open to update Heat for the new Octavia endpoint.  20:12
johnsom: The author is hoping to drum up support with "affects me" votes on the bug. So if you have an interest in Heat getting updated please voice your interest on the bug.  20:13
*** pcaruana has quit IRC20:13
johnsom: #topic Open Discussion  20:14
*** openstack changes topic to "Open Discussion (Meeting topic: Octavia)"20:14
johnsom: I didn't have any more agenda items as I wasn't sure what the turnout was going to be.  Are there other topics we would like to continue?  20:14
kpalan1: we are planning to add the octavia v2 api support in fog-openstack gem  20:14
johnsom: Good question.  20:14
johnsom: I don't know anyone working on that currently  20:15
kpalan1: an issue is created on their github.  20:15
johnsom: kpalan1 Are you able to help with that?  20:15
kpalan1: yes i will be working on it  20:16
rm_work: that was the way I read it :P  20:16
rm_work: "we are planning to add" :)  20:16
jniesz: yes, we would like to contribute that  20:16
rm_work: also: +A'd the QoS patch  20:16
johnsom: Oh, oppps, got distracted looking it up  20:16
johnsom: Cool, it looks like it currently only has lbaasv1 support....  Sad face  20:16
johnsom: Oh, maybe not, I see it in the "requests", just not the models  20:17
kpalan1: waiting for active-active work to complete, we will be starting soon to add octavia v2 api support there, we need it internally for one of our chef based tools  20:17
johnsom: kpalan1 Please feel free to ping us if you run into questions, etc.  20:17
kpalan1: sure, thanks  20:18
bar_: There're 2 issues/proposals that haven't been resolved in the prior meeting: (1) the independent member API (no pool id)  20:18
bar_: (2) bind amphora agent patch  20:18
rm_work: 1) I think we just need to vote on whether we think it will ever be useful to have shared member objects  20:19
nmagnezi: rm_work, ^^ started to sleep in normal hours.. so we did not discuss this again :<  20:19
johnsom: Right.  Since we do have a good group here today, let's start at the top  20:19
*** krypto has quit IRC20:19
rm_work: nmagnezi: i told people that going to a normal schedule would not be a *good thing* for work :/  20:19
rm_work: but no one listens  20:19
*** krypto has joined #openstack-lbaas20:20
*** krypto has quit IRC20:20
*** krypto has joined #openstack-lbaas20:20
johnsom: So, independent members...  20:20
johnsom: bar_ Do you want to give a quick summary again?  20:20
bar_: k, currently we access members in the octavia api only by specifying both pool_id and member_id  20:21
bar_: member_id is unique, so why not ADD another API, to access by member_id alone  20:21
*** rm_mobile has quit IRC20:21
bar_: that's the proposal  20:21
rm_work: yeah I think this is full circle, right?  20:22
rm_work: doing /v2.0/lbaas/member/  20:22
rm_work: because ... we don't really need to do shared members ever IMO  20:22
rm_work: I would agree with this idea, don't need to know a pool_id to look up a member  20:22
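The two access patterns being compared above can be sketched as URL builders. The pool-scoped path is the existing Octavia v2 route; the direct `/members/{member_id}` route is the proposal from this discussion and was hypothetical at the time of this log:

```python
# Sketch of the member lookup paths under discussion. The pool-scoped path
# is today's API; the direct path is the proposed (hypothetical) addition.

BASE = "/v2.0/lbaas"

def member_path_current(pool_id: str, member_id: str) -> str:
    # Existing route: a member is only addressable through its parent pool.
    return f"{BASE}/pools/{pool_id}/members/{member_id}"

def member_path_proposed(member_id: str) -> str:
    # Proposed route: member_id is unique, so look it up directly.
    return f"{BASE}/members/{member_id}"
```

Since member IDs are already unique, the direct route adds no new data model; it only removes the need to know the parent pool when reading.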
xgerman_: so we only want to do GET and LIST ?  20:23
xgerman_: (read only)  20:23
rm_work: I mean, I actually don't know why we couldn't do POST  20:23
johnsom: I think there was a concern raised before about the relationship with pool and member today, that deleting a pool currently deletes its members too.  xgerman_ is that right?  20:23
nmagnezi: xgerman_, why not update as well?  20:23
rm_work: if you pass a pool_id  20:24
xgerman_: yes, we cascade the pool deletion to members  20:24
rm_work: listeners are a sub-object on a LB, and those aren't *under* LB  20:24
xgerman_: but if we only provide read only I see that as less of a concern  20:24
rm_work: and they can be cascade deleted as well  20:24
rm_work: I'm not sure what the cascade deletion has to do with it  20:24
johnsom: I will say that we would have to maintain the current API paths, etc. for backward compatibility.  Otherwise we are talking about LBaaSv3, which I really don't want to consider right now due to all of the other work going on.  20:25
rm_work: lol yes  20:25
rm_work: so we're talking about just adding another resource  20:25
rm_work: like, member_standalone  20:25
rm_work: at /member/  20:25
bar_: technically, /members/  20:26
rm_work: err, yeah i always forget if our resource names are plural in the API >_>  20:26
xgerman_: since we could spend our time doing other things - do we have a use case why we need that?  20:26
rm_work: it's ... easier to access? <_<  20:26
johnsom: You would have to make pool_id mandatory on the member create calls  20:27
rm_work: i'm just saying i'd vote to allow that  20:27
rm_work: not that we should prioritize it  20:27
johnsom: xgerman_ Very good question  20:27
rm_work: if someone wants to spend their time doing something though, I can't stop them  20:27
bar_: xgerman_, I don't see much use, if members are not to be shared.  20:27
rm_work: that's the point of open source, and why companies hire us anyway -- to set priorities  20:27
bar_: It would be easier to access, that's it.  20:28
johnsom: Agreed, but it would be another distraction from getting our major goals for the release done (act/act, drivers, flavor)  20:28
xgerman_: yeah, even if we don't write it we will need to review it  20:28
nmagnezi: so we can set it as "wish list" or something  20:28
rm_work: i'm just saying, if i saw code pop up that does this, I'd review it and be willing to +2 if it's good  20:28
nmagnezi: rm_work, +1  20:29
rm_work: i think the point of the question was just "is this OK?"  20:29
bar_: Approved then?  20:29
johnsom: Does this reach the spec bar or just an RFE?  20:29
nmagnezi: i *think* we kinda all agree that read only direct access to members is okay, it's just not a prio  20:29
xgerman_: RFE — did we ever figure out versioning?  20:30
johnsom: xgerman_ Like API micro-versioning?  20:30
xgerman_: like a client knowing that /members is available  20:30
xgerman_: (without testing every new path)  20:31
johnsom: xgerman_ API discovery is still up in the air last time I checked the api-wg.  We would need to change the client to support this.  20:32
xgerman_: ok, so we should tread lightly on API extensions  20:32
xgerman_: just my 2cts  20:32
johnsom: Big fat TODO still  20:32
rm_work: so we DO have a version bit  20:34
xgerman_: yeah, I have seen too many clients relying on the user to define what’s possible — hate to see a --use-member-direct flag  20:34
rm_work: but i imagine the client could try and fall back  20:34
johnsom: We do have a version that would increment for this enhancement  20:34
nmagnezi: rm_work, if a pool id was not provided, how should the client fall back?  20:34
rm_work: nmagnezi: ah good point  20:35
rm_work: so if no pool-id is provided and the new endpoint isn't there.... <_<  20:35
rm_work: then fail  20:35
rm_work: I guess  20:35
xgerman_: ok, so we increment our version - client checks that and acts accordingly  20:36
nmagnezi: i think that API versioning is a broader topic. for example we can say similar things about the upcoming QoS support  20:36
johnsom: Right.  <note cores need to watch for API additions and make sure the version minor updates>  20:37
johnsom: We have overlooked that recently  20:37
* johnsom slaps his own wrist  20:37
nmagnezi: johnsom, now we have two incentives not to :)  20:38
xgerman_: we should also add that to the API docs so someone knows which version has which API  20:39
johnsom: I will take an action to go update the version starting with QoS.  20:39
nmagnezi: xgerman_, do we have an API call to fetch the version number?  20:39
xgerman_: +1 (I can see us also lumping all changes for a cycle together)  20:39
johnsom: nmagnezi Yes  20:40
xgerman_: GET /  20:40
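The discovery-then-fallback behavior xgerman_ and nmagnezi describe can be sketched as follows. The version list shape is an assumption modeled on typical OpenStack root-version documents, and "v2.1" as the version that would introduce direct member lookup is hypothetical:

```python
# Sketch of client-side version discovery: fetch GET / once, then decide
# whether the (hypothetical) direct member endpoint can be used. The naive
# string comparison is fine for single-digit minors only.

def pick_member_lookup(versions, member_id, pool_id=None):
    # versions: parsed from GET /, e.g. [{"id": "v2.0"}, {"id": "v2.1"}]
    supports_direct = any(v["id"] >= "v2.1" for v in versions)
    if supports_direct:
        return f"/v2.0/lbaas/members/{member_id}"
    if pool_id is None:
        # No pool_id and no new endpoint: nothing to fall back to, so fail,
        # as noted in the discussion above.
        raise ValueError("server too old for direct member lookup")
    return f"/v2.0/lbaas/pools/{pool_id}/members/{member_id}"
```

This is exactly why the version bump matters: without it, the client's only "discovery" mechanism is probing every new path and interpreting 404s.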
rm_work: xgerman_: agreed, for one cycle it's probably fine to lump stuff  20:41
rm_work: and for those of us on master, "missing" features is less of a problem :P  20:41
*** links has joined #openstack-lbaas20:41
xgerman_: but with one year cycles looming I would increment more often  20:41
johnsom: Though I wonder if we should not have a numerical version here as well.  Will have to go back and double check the api-wg  20:42
rm_work: yeah we should just buckle down and be good about doing it i guess  20:42
johnsom: You guys should fire your PTL  20:42
rm_work: lol nope  20:42
rm_work: 4 more years! :P  20:42
nmagnezi: +1 Adam  20:43
johnsom: Anyway, let's summarize here....  20:43
rm_work: Yeah we could do version increments there too probably  20:43
rm_work: cause just having "last updated date" is kinda weird  20:43
rm_work: id=v2.0 is also weird  20:43
johnsom: RFE it.  Ok to add read-only paths, remember to bump the api minor version as a start  20:44
rm_work: shouldn't we have like... ['major', 'minor'] at a minimum?  20:44
johnsom: rm_work Yeah, I am wondering too.  I know this was a copy of neutron-lbaas, but that doesn't mean it's right....  20:44
rm_work: yeah certainly read-only is easier, but i still don't see why it couldn't be a full CRUD resource  20:44
xgerman_: yep, but major is v2  20:45
rm_work: johnsom: i wonder what we break if we add major/minor and just point it to the ID  20:45
rm_work: or maybe actually  20:45
rm_work: major=2 minor=0  20:45
rm_work: would be "now"  20:45
johnsom: Well, if you ask too many questions here your answer will come to micro-versions....  20:45
rm_work: k i'd probably be in favor of major/minor/micro  20:46
rm_work: or something  20:46
rm_work: what's the third one  20:46
xgerman_: nah, we just increase minor sequentially  20:46
rm_work: so we do THAT  20:46
rm_work: that works  20:46
rm_work: it has id  20:47
cgoncalves: +1 for microversion  20:47
rm_work: but also the real stuff  20:47
johnsom: So...  We are compliant today, just not supporting microversions yet  20:47
rm_work: wooo standards  20:47
*** links has quit IRC20:47
rm_work: yep, +1 implement microversion  20:47
rm_work: probably we should do that inside the cycle (it looks trivial)  20:47
johnsom: id is the same as we have now  20:47
xgerman_: ok, microversions it is  20:47
rm_work: so we have a first microversion for QoS and whatever else  20:47
johnsom: Oye, ok...  Please read the whole doc before deciding we want to jump on that.  20:48
rm_work: and then we can try to be good about incrementing on API changes from now  20:48
rm_work: ok will read  20:48
johnsom: It can also make the client a bit of hell  20:48
rm_work: should we do an official vote in January when we're all back?  20:48
johnsom: Yes, let's hold off on the microversion stuffs  20:48
xgerman_: can’t we just do server and ignore client?  20:49
johnsom: bar_ Did you get an answer out of that on the member API?  20:49
rm_work: i mean ... the POINT is for the client, isn't it?  20:49
johnsom: rm_work +1  20:49
bar_: hmm, why only read-only path?  20:49
rm_work: yeah i'm not sure i follow read-only either  20:49
xgerman_: I don’t like the ACCEPT Header stuff  20:49
rm_work: i would just do it as a full thing  20:49
rm_work: and we'll review  20:49
rm_work: i think it'll be fine  20:50
rm_work: when people see that it works  20:50
rm_work: again though, some other stuff is probably higher priority  20:50
rm_work: like finishing our tempest stuff (did you say you were going to look at that?)  20:50
johnsom: Ok, maybe split the patches just in case someone comes up with a reason why the updates are a bad idea  20:50
bar_: I am.  20:50
bar_: It's... neglected...  20:50
johnsom: Oh yes, tempest is important.  20:51
bar_: I need to re-write some patches.  20:51
johnsom: It is a community goal.  I have updated our status to in-progress.  20:51
bar_: can we deprecate octavia/tests/tempest?  20:51
johnsom: Yes, it goes away with the tempest plugin patch  20:51
johnsom: Though we need to time it with the overall tempest plugin switch over  20:52
bar_: Is it for Queens?  20:52
johnsom: We need to have a working tempest plugin for queens, yes  20:52
bar_: ok. API proposal is approved? (though not prioritized)  20:53
johnsom: Well, technically you would create the RFE story and we would approve it there, but essentially yes.  20:54
bar_: I see.  20:54
johnsom: After the tempest plugin is done....  GRIN  20:54
johnsom: just kidding  20:54
bar_: I'm working on it  20:54
johnsom: THANK YOU  20:54
bar_: bind amphora agent?  20:54
johnsom: Ok, six minutes, bind amphora agent.  This was about a better way to do it, right?  20:55
*** Alex_Staf has joined #openstack-lbaas20:55
bar_: rm_work, nmagnezi ?  20:55
johnsom: Last I remember rm_work was going to comment/help with the "better" way  20:55
bar_: yeah, but nmagnezi and I have reservations....  20:55
rm_work: better way should just be to finally implement the amphora-api for update-config  20:56
nmagnezi: basically bar's current implementation is to create the neutron port before we call "nova boot" so we'll know the IP in advance and configure amphora-agent.conf  20:56
rm_work: and our initial connection to the amp can do a config update to set the right listening IP  20:56
rm_work: yeah, and that's untenable for some types of networks  20:56
nmagnezi: rm_work does not like that implementation and prefers an agent restart API call to update the file and reload config  20:56
johnsom: nmagnezi We didn't do that because it doesn't work in some deployments if I remember  20:57
nmagnezi: rm_work, what types of networks? :)  20:57
bar_: can we have different flows for different types of networks? Is it... done?  20:57
rm_work: the kind where you can't choose the network that gets plugged to a new VM :)  20:57
nmagnezi: johnsom, correct. rm_work's deployment does not support it for example.  20:57
nmagnezi: rm_work, so how does nova know? :) just wondering..  20:58
rm_work: nova figures it out internally  20:58
rm_work: based on the HV that it schedules, it also schedules a network  20:58
johnsom: I think originally nova-networks had an issue with it too.  Like you couldn't boot without at least one nic  20:58
cgoncalves: johnsom: true  20:58
nmagnezi: rm_work, is that a thing in nova? or is it an internal solution you guys have?  20:58
johnsom: nova networks is dead now BTW  20:59
rm_work: I know it's not just me, too -- i talked to at least one other deployer that had the same issue  20:59
rm_work: in our case it is a custom scheduler in nova, yes  20:59
nmagnezi: johnsom, good riddance (nova network)  20:59
xgerman_: we also talked about taking it from DHCP/cloud init/?  20:59
rm_work: yeah that might be possible  20:59
johnsom: I will say, we still need the amp config update API.  That is still a super valid need.  20:59
xgerman_: but we shouldn’t comingle the two  21:00
rm_work: yes, and my point was that we should just take this opportunity to do it and use it  21:00
nmagnezi: johnsom, for health manager list? (trying to recall)  21:00
johnsom: rm_work I know that is what was originally discussed  21:00
johnsom: Ugh, meeting time is up....  21:00
*** openstack changes topic to "Welcome to LBaaS / Octavia - Queens development is now open."21:00
openstack: Meeting ended Wed Dec 20 21:00:54 2017 UTC.  Information about MeetBot at . (v 0.1.4)  21:00
openstack: Minutes (text):
*** Alex_Staf has quit IRC21:01
johnsom: Just made it ..  ha  21:01
nmagnezi: well.. next year i guess..  21:01
cgoncalves: hard stop there :)  21:01
openstackgerrit: Adam Harwell proposed openstack/octavia master: Switch to using PKCS12 for TLS Term certs
johnsom: nmagnezi yeah, HM list was one need  21:01
nmagnezi: johnsom, what's the usecase here? in case an operator loads a new hm to an existing production env?  21:02
xgerman_: yep, or an ip changes  21:02
johnsom: Yeah, all of the above.  21:02
rm_work: so, once we have that, we can solve this problem as well by just utilizing that call  21:03
rm_work: and sending it the correct IP to bind  21:03
nmagnezi: bar_, ^^ what do you think?  21:03
bar_: nmagnezi, need to understand better how it is implemented  21:04
rm_work: we're literally polling to connect to the VM as it comes up, so the time it'd be listening on 0.0.0.0 is super low, and we have other security measures (client-cert-auth) that should already be fairly effective  21:04
cgoncalves: IMO rm_work's proposal addresses not only the problem at hand but also enables use cases  21:05
cgoncalves: if I understood it correctly, that is  21:05
rm_work: not to mention that if someone did manage to beat us to hitting the rebind call, AND they managed to bypass our cert security ... we'd be unable to connect and quickly fail the VM, and it'd never go into service  21:05
nmagnezi: johnsom, btw re: points to which says.. we deprecate n-lbaas in Feb 2018 ?  21:05
openstack: Launchpad bug 1737567 in OpenStack Heat "Direct support for Octavia LBaaS API" [Medium,New] - Assigned to Rabi Mishra (rabi)  21:05
rm_work: cgoncalves: that's the idea  21:05
cgoncalves: rm_work: how is that being polled btw?  21:05
nmagnezi: johnsom, wanted to raise this in the open discussion but we ran out of time..  21:06
cgoncalves: oops yes :)  21:06
johnsom: nmagnezi I think we also wanted to turn on debug, change the health heartbeat intervals, rotate the heartbeat key, etc.  21:06
rm_work: cgoncalves: so when we spin a new Amphora, we sit in a loop trying to connect to it to make sure it's up and healthy, and i believe we do some initial config with it  21:06
rm_work: if we never get a response (within configured timeout period), we assume the create failed (usually because nova broke or our image was bad or network isn't working)  21:07
johnsom: nmagnezi We have not declared a deprecation for neutron-lbaas anywhere....  I don't see it on the governance site...  21:07
rm_work: any of those cause us to just throw away that amphora as a failure  21:07
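The boot-polling loop rm_work describes can be sketched roughly as below. The `probe` callable, timeout, and interval names are illustrative placeholders, not Octavia's actual configuration options:

```python
import time

# Rough sketch of the behavior described above: after nova reports the
# amphora ACTIVE, the controller loops trying to reach the amphora agent
# until it answers or a configured timeout expires; a timeout means the
# amphora is treated as failed and thrown away.

def wait_for_amphora(probe, timeout_s=300.0, interval_s=5.0):
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        if probe():  # e.g. an HTTPS request to the agent's API succeeding
            return True
        time.sleep(interval_s)
    # Never got a response: assume the create failed (nova, image, network).
    return False
```

The window between "nova says ACTIVE" and "agent actually listening" is exactly the gap this loop covers, which is also why the agent's temporary wide-open bind was considered low risk.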
nmagnezi: johnsom, neutron-lbaas ==> assert:follows-standard-deprecation  21:08
nmagnezi: johnsom, i'm getting this wrong maybe  21:08
*** Alex_staf has joined #openstack-lbaas21:08
cgoncalves: rm_work: k. I'm wondering if it would be better to listen on the message bus for nova to tell the amp is up and then poll for the service  21:08
nmagnezi: johnsom, he just wrote it in the bug "..and will deprecate the current neutron-lbaas extension starting Queens release (February 2018)"  21:08
johnsom: nmagnezi Yes, we assert that we will follow standard deprecation policy, not that it is yet deprecated....  21:08
cgoncalves: rm_work: it would also be faster to react in case nova reports it failed to boot  21:09
rm_work: nova tells us the amp is ACTIVE  21:09
rm_work: before we start to poll  21:09
rm_work: but from nova considering the amp is "up" to when our agent is actually listening ... is not clear  21:09
rm_work: yes, if nova goes to ERROR or something we fail at that point  21:09
johnsom: cgoncalves Nova claims it's up long before the kernel is even booted....  21:09
nmagnezi: johnsom, oh, alright. thanks for that answer.  21:09
rm_work: the polling happens after Nova already says it's done  21:09
bar_: rm_work, given the security measures already applied on communication with the amphorae, is your solution significantly better than leaving the agent on 0.0.0.0?
rm_work: bar_: honestly I don't really see a huge need to "fix" this  21:09
cgoncalves: rm_work: ah ok, so we wait for the ACTIVE signal  21:10
rm_work: but I can understand just not wanting to have extra ports open on networks that are bad  21:10
rm_work: opening up to DDoS or something  21:10
rm_work: cgoncalves: yes  21:10
johnsom: bar_ This bug is OLD.  It was in before we added the network namespace....  21:10
nmagnezijohnsom, the agent runs on the root namespace21:10
cgoncalvesk, I have no knowledge how it is implemented in octavia sorry :)21:11
rm_work.... nothing else is plugged there21:11
rm_workthe danger was that the member networks used to be plugged there as well21:11
rm_workand we didn't: A) want to take up a port on the VIP network; B) open our API to ddos on the VIP/member networks21:11
johnsomcgoncalves No worries.  Just noting that nova claims it is booted when it starts the HV process, not when anything is actually accessible/running.21:12
rm_worknow we're alone on the mgmt-net21:12
rm_worki personally think running on is not really a risk anymore21:12
rm_worksince the only thing that's typically plugged in that namespace is the mgmt-net21:12
johnsomReally this is just down to checking a security box that we don't listen on
bar_rm_work, does my patch fail amphora boot for you?21:13
rm_workso, the other option that was proposed (getting the configured IP from cloud-init and setting our config from that) is also interesting, if someone has some idea how to do that21:13
rm_workbar_: yes, it will21:13
rm_workwe cannot create ports ahead of VM creation21:14
rm_workbecause we cannot specify a mgmt-net21:14
johnsomI know how to do that, roughly21:14
rm_workit has to be auto-assigned to us via the nova scheduler21:14
bar_rm_work, can you help me insert a flag that will skip that allocation in your deployment? is that something you would consider implementing?21:14
rm_workI still don't see the point of complicating the flows with yet more port creation/plugging/cleanup21:15
rm_workwe already have enough of that21:15
bar_I hear you21:15
rm_workI think if johnsom knows how to use cloud-init to do it, that would probably work fine21:15
rm_workcurrently we juggle so many objects manually that need to be tracked and cleaned up that I am in a constant state of paranoia about how much orphaned stuff is floating around21:16
rm_worksecurity groups, ports, vms21:16
rm_workI've seen all of these be orphaned in my environment for various reasons T_T21:17
johnsomYeah, I really like leaning on nova to create that port.  It's better for bare metal too.21:18
rm_workI'm working on a set of scripts to scan and link all objects we create so I can track what is possibly orphaned21:18
rm_workprobably when I'm done I'll throw it in contrib/21:18
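The orphan-scan idea rm_work describes could start as a pure cross-check between what Neutron reports and what Octavia's DB still tracks. A minimal sketch (the `octavia-lb-` name prefix and the plain-dict port shape are assumptions for illustration, not Octavia's actual naming scheme):

```python
# Sketch of the orphan scan: given the ports Neutron reports and the port
# IDs Octavia's DB still tracks, anything Octavia-named on the Neutron side
# with no matching DB record is a candidate orphan. The "octavia-lb-"
# prefix is a hypothetical naming convention, not Octavia's real one.
def find_orphaned_ports(neutron_ports, tracked_port_ids, prefix="octavia-lb-"):
    """Return Neutron ports that look Octavia-created but are untracked."""
    tracked = set(tracked_port_ids)
    return [
        p for p in neutron_ports
        if p["name"].startswith(prefix) and p["id"] not in tracked
    ]
```

The same shape works for security groups and VMs: list from the cloud side, diff against the service's own records, flag the leftovers for manual review rather than auto-deleting.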
bar_Can nova alter the agent_configuration file?21:18
johnsomNo, but cloud-init could if we wanted.  Or the agent can pick up cloud init data and update the config file21:19
xgerman_then there is the infamous metadata service21:20
johnsomWhich is what cloud-init would use if we didn't disable it21:20
rm_workyeah I guess we could just update the agent to look for cloud-init stuff on boot21:20
rm_work*on start21:20
xgerman_or just the init script21:21
rm_workand update its config before it even runs21:21
xgerman_for the service21:21
*** aojea has joined #openstack-lbaas21:21
rm_workupdate config from cloud-init21:21
rm_workthat seems doable21:21
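The "agent picks up cloud-init data and updates its config before it runs" idea could look roughly like this. The metadata file path, the `mgmt_ip` key, and the `[amphora_agent]`/`bind_host` option names are all hypothetical stand-ins for whatever cloud-init and the agent actually use:

```python
# Sketch: rewrite the amphora agent's config from cloud-init data before
# the agent binds its listener, so it only listens on the management IP.
# File paths, JSON keys, and config option names are assumptions for
# illustration, not the real agent's layout.
import configparser
import json


def update_bind_host(metadata_path, config_path):
    """Copy the management IP from a cloud-init JSON blob into the
    agent's config file; return the IP, or None if none was provided."""
    with open(metadata_path) as f:
        metadata = json.load(f)
    mgmt_ip = metadata.get("mgmt_ip")
    if not mgmt_ip:
        return None  # nothing to do; keep the existing bind address

    cfg = configparser.ConfigParser()
    cfg.read(config_path)
    if not cfg.has_section("amphora_agent"):
        cfg.add_section("amphora_agent")
    cfg.set("amphora_agent", "bind_host", mgmt_ip)
    with open(config_path, "w") as f:
        cfg.write(f)
    return mgmt_ip
```

Run from the agent's init script before the service starts, this avoids needing to know the port's IP at VM-boot time, which is the whole point of letting nova/cloud-init hand it to us.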
bar_I'm not convinced it will have much value, I must admit. No one argues that it is an actual vulnerability, other than the root argument, which perhaps should be looked at independently.21:24
bar_If we're satisfied with the current security measures, then it doesn't make sense to invest time in a cloud-init solution, am I wrong?21:26
rm_workYeah I'm not sure it's worth very much time21:26
bar_nmagnezi, are you still here?21:26
rm_workjust that people have a tendency to freak out when they see it, and hardly anyone actually knows how the namespacing works21:27
rm_workI feel like the most common troubleshooting question here is "why don't I see my vip or member networks plugged, did something go wrong?"21:27
rm_workanywho, i'm on vacation today so I'm gonna go back to doing that, have fun ya'll :P21:28
bar_rm_work, thanks for joining21:28
rm_workI'm technically not back until 2018 but I can pop in if i see pings21:29
openstackgerritMerged openstack/octavia master: Extend api to accept qos_policy_id
*** threestrands_ has joined #openstack-lbaas21:36
bar_johnsom, Is the qos client patch blocked due to versioning?21:39
johnsombar_ ummm21:40
johnsomGive me a minute to look at it21:41
openstackgerritAdam Harwell proposed openstack/octavia master: Switch to using PKCS12 for TLS Term certs
johnsombar_ Any idea what the error is if they run it against a non-qos API?21:43
bar_johnsom, never tried it.21:43
johnsomI think I can, just a minute21:43
johnsomOk, a few minutes, this stack doesn't have qos enabled...21:47
johnsomstack@devstackpy27-2:/etc/neutron/plugins/ml2$ openstack loadbalancer create --vip-qos-policy-id test --vip-subnet-id private-subnet --name lb121:51
johnsomUnknown attribute for argument load_balancer.loadbalancer: vip_qos_policy_id (HTTP 400) (Request-ID: req-0b708b91-1b38-4b64-bab9-3da6fe72d246)21:51
bar_is it good or bad?21:51
johnsombar_ I am fine with letting that through. It's not ideal, but at least is clear to the user what is wrong.21:52
bar_Did you try PUT?21:52
bar_openstack loadbalancer lb_id --vip_qos_policy_id qos_id21:53
bar_*loadbalancer set21:53
johnsomAssuming I have one booted.  glance is acting up and not acknowledging the tag21:53
*** longstaff has quit IRC21:56
johnsombar_ Yeah, same exact error21:57
*** aojea has quit IRC22:04
bar_johnsom, should we update the example local.conf, now that this feature is merged?22:10
xgerman_+ we need a gate22:10
bar_a gate??22:10
xgerman_yep, a jenkins gate where we have qos enabled and appropriate tempest tests22:11
bar_I see22:12
johnsomFrankly we should improve the error message when neutron doesn't have QoS enabled:22:13
johnsomstack@devstackpy27-2:~/project/testqosclient/python-octaviaclient$ openstack loadbalancer create --vip-qos-policy-id foo --vip-subnet-id private-subnet --name lb122:13
johnsomThe resource could not be found.22:13
johnsomNeutron server returns request_ids: ['req-70b3cf2f-c04d-4fef-a4a5-313fb45b64ae']22:13
johnsomThat is neutron's response that passes through our API.....22:14
bar_Note I already have a patch waiting for review about NeutronException, which was dependent on the QoS feature.22:15
johnsombar_ I think this is client side...22:16
*** kpalan1 has quit IRC22:16
johnsomBut I am aware of your patch too22:16
bar_johnsom, oh, i see. yeah, my patch is server side...22:17
johnsomThough I think this might just be an overall issue with OSC....22:18
johnsom Line 15722:18
johnsomI think it's trying to go out and verify the policy ID and that is all that it gets back....22:19
bar_Is there a foo qos_policy?22:20
johnsomNo because qos isn't enabled22:21
bar_You would like the client to mask that in a way? Or verify qos is enabled?22:21
johnsomWell, I don't know if it's *our* bug or a bug in client_manager.neutronclient.list_qos_policies22:22
openstackgerritMerged openstack/python-octaviaclient master: Updated from global requirements
johnsomI'm just thinking from an end-user, this doesn't tell me that QoS is disabled in neutron.22:22
johnsomIt implies that I just have the wrong ID22:23
johnsomstack@devstackpy27-2:~/project/testqosclient/python-octaviaclient$ openstack network qos policy create test22:23
johnsomNotFoundException: Not Found (HTTP 404) (Request-ID: req-8ff562d1-4f32-48c3-a9da-f83f74731e60), The resource could not be found.22:23
johnsomSame not very useful error when you try to create a policy22:23
bar_Do we have similar run-time resource-availability verifications in octavia code?22:25
bar_...similar to what you imply is required.22:25
bar_Can you direct me to such check-point?22:26
*** rcernin has joined #openstack-lbaas22:32
*** aojea has joined #openstack-lbaas22:33
bar_johnsom, do you want me to implement a similar checkpoint in the client?22:34
johnsombar_ Hmm, I'm really on the fence.  It will just slow down the happy-path.  Plus, will the commands still show up in --help?22:35
*** jappleii__ has joined #openstack-lbaas22:35
johnsomI guess it's a "nice to have for user experience" kind of thing22:35
johnsomReally I think the neutron client should return a nicer message when an extension is missing.22:36
*** threestrands_ has quit IRC22:36
bar_and/or... openstackclient should also implement similar checkpoints22:37
johnsomYeah, so, up to you. I'm not going to block your patch over it.  If you are motivated I will support you in it22:37
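The client-side "checkpoint" being discussed amounts to asking Neutron for its extension list and failing early with a readable message when `qos` is absent, instead of letting a generic 404 bubble up. A minimal sketch, written against the alias list as plain data so it could be fed from any client's extension-discovery call (the wording of the error is invented here):

```python
# Sketch of a client-side precheck: fail with a clear message when a
# required Neutron extension is missing, rather than surfacing a bare
# "The resource could not be found." The error text is hypothetical.
def require_extension(extension_aliases, alias="qos"):
    """Raise a readable error if a required Neutron extension is absent."""
    if alias not in extension_aliases:
        raise RuntimeError(
            "Neutron does not have the '%s' extension enabled; "
            "--vip-qos-policy-id is not supported on this cloud." % alias
        )
```

The trade-off johnsom raises is real: this adds a round trip to the happy path, so a client would likely only run it when the qos-related option is actually passed.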
*** Alex_staf has quit IRC22:37
bar_johnsom, I will not pursue it at the moment. not as part of this patch at least.22:38
*** AlexStaf has joined #openstack-lbaas22:38
johnsomOk, good plan22:38
johnsomI think I can quickly test and +2 anyway22:38
bar_johnsom, wo-ho!22:39
bar_johnsom, did you read my private message?22:41
*** mixos has joined #openstack-lbaas22:41
johnsomdidn't get one, try again?22:41
bar_sent you several22:42
johnsomAre you registered and logged into freenode?22:42
bar_probably not22:42
johnsomYeah, they changed the security over the weekend to require you to be logged in22:42
johnsomThere was a bunch of bad spam happening22:43
bar_I see22:43
bar_how do "log in"?22:43
AlexStafYeah there was weird stuff22:43
bar_*how do I22:43
cgoncalvesbar_: /nickserv help register22:44
johnsomThat page has the details22:44
bar_cgoncalves, johnsom, thx22:44
cgoncalves(uh, oh! I'm registered for almost 11 years now :D)22:45
johnsomI let mine drop for a long time then set it up again for OpenStack work22:46
*** AlexStaf has left #openstack-lbaas22:47
cgoncalvesjohnsom: could you please give some +2 love? :)22:49
johnsomOk, will take a look after I'm done with this qos client22:50
cgoncalvessure. it's not urgent for me anyway. thanks22:50
*** mixos has quit IRC22:56
*** belharar_ has joined #openstack-lbaas22:58
*** bar_ has quit IRC22:58
*** belharar_ has quit IRC22:58
*** krypto has quit IRC22:58
*** bar_ has joined #openstack-lbaas22:58
*** aojea has quit IRC23:06
openstackgerritMichael Johnson proposed openstack/octavia-tempest-plugin master: Update README
openstackgerritMichael Johnson proposed openstack/octavia master: ACTIVE-ACTIVE: Initial distributor data model
openstackgerritBar RH proposed openstack/octavia-tempest-plugin master: Update README
johnsomOops, thanks23:29
bar_johnsom, the rest of them are broken too I'm afraid23:31
bar_I'll fix that..23:31
openstackgerritBar RH proposed openstack/octavia-tempest-plugin master: Update README
openstackgerritBar RH proposed openstack/octavia-tempest-plugin master: Add missing file
bar_johnsom, ^23:54
johnsomOk, watching that23:55
