20:00:14 <johnsom> #startmeeting Octavia
20:00:15 <openstack> Meeting started Wed Jun 29 20:00:14 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:16 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:18 <openstack> The meeting name has been set to 'octavia'
20:00:25 <dougwig> o/
20:00:25 <johnsom> Hi folks
20:00:25 <sbalukoff> Howdy, folks!
20:00:26 <rm_work> o/
20:00:30 <Guest51388> Hi
20:00:33 <TrevorV> o/
20:00:33 <Frito> o/
20:00:35 <eranra> Hi
20:00:39 <perelman> hi
20:00:44 <Frito> meeting start?
20:00:51 <eezhova_> hi
20:01:03 <johnsom> Yes, I started the meeting
20:01:08 <johnsom> #topic Announcements
20:01:21 <johnsom> Octavia / Neutron-LBaaS mid-cycle
20:01:32 <johnsom> #link https://etherpad.openstack.org/p/lbaas-octavia-newton-midcycle
20:02:17 <johnsom> Also, I have added lbaas-merge RFE bugs with the tag lbaas-merge in launchpad
20:02:17 <nmagnezi> hello everyone
20:02:24 <sbalukoff> Nice!
20:02:27 <johnsom> #link https://bugs.launchpad.net/octavia/+bugs?field.tag=lbaas-merge
20:02:40 <johnsom> So we can start tracking those.  Please update/expand as needed
20:02:51 <sbalukoff> Sounds good!
20:03:15 <johnsom> Last announcement I have, I am planning to put in our usual LBaaS/Octavia talk for the summit.
20:03:27 <fnaval> o/
20:03:34 <johnsom> I think we should cover merge topics (to raise awareness).
20:03:46 <dougwig> oh yes.
20:03:51 <nmagnezi> o/
20:03:53 <sbalukoff> johnsom: I would like to be involved in that talk, eh.
20:04:00 <johnsom> If you have any other topics you would like to have some time during the talk, please send me topic ideas
20:04:09 <sbalukoff> Assuming you can stomach sharing a stage with me again. ;)
20:04:16 <dougwig> sbalukoff: heh, and next you're going to state that the world is round.
20:04:23 <johnsom> sbalukoff Are you guys going to be in a place to do a A/A demo?
20:04:56 <sbalukoff> johnsom: No idea at this point. :/
20:05:15 <dougwig> going back to the midcycle announcement, please update that etherpad if you're going; there are a lot of tentatives still on the list.
20:05:30 <johnsom> Ok.  Well, please send me an e-mail or IRC with your topics so I can get a list of participants/topics for the talk
20:05:37 <sbalukoff> Ok!
20:05:59 <johnsom> Any other announcements today?
20:06:33 <johnsom> #topic Brief progress reports / bugs needing review
20:06:58 <johnsom> I have been working on cleaning up our bugs list and getting the merge tasks in so we can track them.
20:07:15 <johnsom> I should be back in the coding game this week, so that is going to be nice.
20:07:16 <fnaval> i too have been working on bugs
20:07:27 <Frito> sry, back, disconnected :-(
20:07:29 <TrevorV> Been mostly focused on internal reviews.  Will go in and update my health monitor change in neutron lbaas shortly so we can get that merged soon
20:07:32 <fnaval> verifying and closing out where necessary
20:07:34 <sbalukoff> Oh-- apparently there's a new IBM team out of China who are going to be working on LBaaS / Octavia. I couldn't tell from the e-mail I got whether they intend to do a lot of upstream work. Obviously, I'm pushing hard for that to be the case.
20:07:40 * blogan sneaks in late
20:08:02 <fnaval> reviews needed on these - should be quick
20:08:03 <fnaval> #link https://review.openstack.org/#/c/309635/
20:08:09 <fnaval> #link https://review.openstack.org/#/c/308091/
20:08:10 <sbalukoff> I'm also seeing a light at the end of the tunnel regarding the internal work I'm doing. In a week or two I expect to have more time to work on upstream again.
20:08:17 <eezhova_> The functional job for lbaas is ready and passing, please review
20:08:23 <eezhova_> #link https://review.openstack.org/320999
20:08:26 <sbalukoff> Though next week I will be out, as I'm taking some vacation after the 4th.
20:08:52 <johnsom> cool, thanks eezhova_!
20:08:58 <nmagnezi> reviews also needed on those two:
20:08:59 <nmagnezi> https://review.openstack.org/#/c/299998/
20:08:59 <fnaval> also if anyone has any spare cycles and would like to take over the TempestPlugin for Neutron LbaaS
20:09:00 <fnaval> #link https://review.openstack.org/#/c/321087/
20:09:04 <johnsom> And fnaval
20:09:07 <nmagnezi> https://review.openstack.org/#/c/327966/
20:09:10 <eezhova_> #link https://review.openstack.org/#/c/329507/
20:09:22 <eezhova_> #link https://review.openstack.org/#/c/332913/
20:09:33 <eezhova_> #link https://review.openstack.org/#/c/334374/
20:09:56 <sbalukoff> Sweet.
20:10:07 <johnsom> Nice.  I like the activity.
20:10:12 <eezhova_> these three are quite ready to merge :)
20:10:24 <johnsom> We also have someone updating our DIB elements, so good stuff there too
20:10:38 <sbalukoff> Yeah, that's great!
20:11:15 <johnsom> Ok, on to our other topics today
20:11:29 <johnsom> #topic Amphora-agent supporting multiple Linux flavors (nmagnezi)
20:11:40 <johnsom> nmagnezi you have the floor
20:11:47 <nmagnezi> hi guys
20:11:51 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/098464.html
20:12:11 <nmagnezi> yup. so the issue is summed up in this mail
20:12:24 <nmagnezi> i basically have two possible solutions for this
20:12:50 <dougwig> i'm fine with the idea as long as someone on the team is willing to maintain it. adding work for another company, in a vacuum, makes no sense to me.
20:13:24 <nmagnezi> dougwig, by "adding work" what do you mean exactly
20:13:31 <sbalukoff> dougwig: +1
20:13:44 <nmagnezi> dougwig, i intend to work on the patch. can you please tell me what more is needed?
20:13:49 <johnsom> Yeah, I agree with dougwig, someone has to do the work and maintain it.  That said, I think there is value in doing a systemd spin as it seems to be the path forward for a number of these distros
20:13:56 <dougwig> nmagnezi: your proposal adds code, potential bugs, maintenance, doubles the test matrix, etc.  it's not free after the code is submitted.
20:14:39 <dougwig> nmagnezi: i understand redhat doesn't want to ship an ubuntu image. makes sense. but it's not free, and we have to realize and account for that.
20:14:44 <nmagnezi> dougwig, agreed. and as johnsom said supporting systemd is valuable for more than just "vendor x" :)
20:14:59 <blogan> it's the future!
20:15:07 <nmagnezi> dougwig, yup. i understand
20:15:09 <dougwig> i think xenial is systemd, so we're stuck with that steaming pile of overreach, regardless of distro.
20:15:17 <sbalukoff> nmagnezi: Right. well, the idea here is that "vendor X" should probably be maintaining it. ;)
20:15:29 <nmagnezi> sbalukoff, :)
20:15:31 <blogan> dougwig: just wait until the next "better" one comes around
20:15:32 <sbalukoff> dougwig: Haha!
20:15:35 <johnsom> dougwig Tell us how you really feel about systemd
20:15:35 <nmagnezi> but wait
20:15:54 <nmagnezi> what do you guys think about the "do it like lbaas and l3 agent" approach?
20:16:20 <sbalukoff> It sounds like that ought to simplify start-up woes.
20:16:39 <sbalukoff> And make for less distro-dependent code. But I could be wrong, I suppose.
20:16:53 <johnsom> Well, I think we have value in the way upstart is set up today (not just because I had a part in that), where the processes can start and restart themselves independently
20:17:20 <dougwig> however, it may make the time until the vip is passing traffic longer, as it'd go from native NIC and haproxy to needing the agent to start, and then settle, and then plug.
20:17:28 <sbalukoff> Yep, it's a lot more work to build these components into a daemon that upstart is already doing for us.
20:17:50 <johnsom> If we went down the agent path, we would need to implement process monitoring for the other parts (haproxy, keepalived, etc.) which upstart provides us today
20:18:01 <sbalukoff> Yep.
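To make johnsom's point concrete: a minimal sketch (hypothetical, not Octavia code; the commands, flags, and config paths are assumptions) of the respawn handling the agent would have to reimplement on the agent path, which upstart's respawn stanza, or systemd's Restart=always, provides declaratively today.

```python
# Hypothetical sketch only -- not the amphora agent. Illustrates the
# supervision the agent would have to take on if upstart/systemd no
# longer managed haproxy and keepalived for it.
import subprocess
import time

# Commands and config paths are assumptions for illustration.
PROCESSES = {
    'haproxy': ['haproxy', '-db', '-f', '/var/lib/octavia/haproxy.cfg'],
    'keepalived': ['keepalived', '-n', '-f', '/etc/keepalived/keepalived.conf'],
}

def supervise():
    children = {name: subprocess.Popen(cmd) for name, cmd in PROCESSES.items()}
    while True:
        for name, proc in children.items():
            if proc.poll() is not None:  # child exited; restart it
                children[name] = subprocess.Popen(PROCESSES[name])
        time.sleep(1)

if __name__ == '__main__':
    supervise()
```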
20:18:14 <dougwig> can we just have systemd be the load balancer, and respond to our REST requests?
20:18:19 <nmagnezi> johnsom, like in https://review.openstack.org/#/c/327966/ ?;)
20:18:28 <nmagnezi> dougwig, lol
20:19:02 <johnsom> dougwig, sure, anything is possible (as he shakes his head towards dougwig)
20:19:07 <nmagnezi> johnsom, imo the agent path will eventually be more future proof
20:19:22 <dougwig> i'm just keeping with the design principles of systemd, y'all.  don't shoot the messenger.
20:19:30 <sbalukoff> Hahaha
20:19:40 <nmagnezi> johnsom, plus it has already proven to work well for two other agents
20:19:41 <johnsom> nmagnezi True. It also makes sequencing very clear.
20:20:00 <dougwig> nmagnezi: i'd like to see the boot time different between ifup/haproxy versus agent+same.
20:20:07 <dougwig> /different/difference/
20:20:33 <nmagnezi> dougwig, that's fair.
20:20:53 <dougwig> ideally we'd have a container amphora that was responding to traffic in 2-3 seconds; adding an agent on top is significant in that kind of scenario.
20:21:13 <sbalukoff> dougwig: +1
20:21:20 <sbalukoff> That's the future I want to see.
20:21:45 <nmagnezi> will the containers use the same python agent?
20:21:46 <johnsom> I agree, but we aren't dealing with the service VM amp at that point either
20:22:16 <sbalukoff> nmagnezi: Probably initially.
20:22:39 <sbalukoff> nmagnezi: Though, of course, especially interface manipulations don't always translate 1:1 between VMs and containers.
20:22:40 <dougwig> johnsom: like we won't just copy the framework over?  c'mon, devs are lazy.  i mean, efficient.
20:22:44 <rm_work> dougwig: i think emacs already has a systemd plugin to do that
20:22:53 <johnsom> Haha
20:23:01 <nmagnezi> hah
20:23:03 <sbalukoff> rm_work: HAHA!
20:23:14 <sbalukoff> emacs has everything you could possibly want except a good text editor.
20:23:17 <johnsom> Oh, emacs, you had to go there
20:23:26 <rm_work> yep, HAD to.
20:23:37 <rm_work> anyway, I like systemd better than upstart personally >_>
20:23:41 <dougwig> you might need three foot pedals and a head-banging stick to do all the combos quickly, but at least the arrow keys work without switching modes.
20:23:46 <rm_work> so i may be not one to talk
20:23:58 <johnsom> Ok, so I would propose going down the systemd path and not folding this all into the agent.
20:24:13 <dougwig> +1 from me, personally.
20:24:15 <nmagnezi> okay so I just wanted to know that if I attempt to implement "the agent path" it will be something you will be willing to eventually merge
20:24:21 <sbalukoff> Makes sense, especially since it's viable in xenial, too.
20:24:26 <nmagnezi> aye. glad to hear that
20:24:28 <rm_work> as proven by me submitting systemd fixes and having my CR blocked because i refuse to touch upstart >_>
20:24:38 <nmagnezi> btw that means changes to the way things are done with Ubuntu as well
20:24:52 <nmagnezi> aside from how the agent itself is started on boot
20:25:03 <dougwig> nmagnezi: i think you're reading johnsom's suggestion and the +1's backwards.  :)
20:25:15 <sbalukoff> Eh...  We'll probably want to move the 'default' amphora image beyond trusty at some point soon anyway.
20:25:15 <johnsom> nmagnezi Yes, that is fine as long as it works and meets the functionality
20:26:04 <nmagnezi> dougwig, i stand corrected.
20:26:14 <johnsom> sbalukoff Yes.  I am slightly worried that DIB might do that TO us at some point.  Versioning isn't clear in the elements IMHO
20:26:24 <sbalukoff> Yep.
20:27:03 <nmagnezi> johnsom, dougwig, to conclude you guys are against the agent path and want me to replicate the use of sysvinit with systemd?
20:27:05 <sbalukoff> That, and I don't want to have to back-port trusty fixes for the next three years.
20:27:11 <dougwig> based on how long it'll take infra to switch over, i expect we have until sometime in the fall.
20:27:27 <johnsom> Ok, I think we are aligning around the systemd approach.  nmagnezi does that cover your topic?
20:27:52 <nmagnezi> johnsom, yes.
20:28:05 <johnsom> Excellent.  thank you for the topic!
20:28:25 <johnsom> #topic Should amphorae be rebootable?
20:28:39 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1517290
20:28:40 <openstack> Launchpad bug 1517290 in octavia "Not able to ssh to amphora or curl the vip after rebooting" [High,Opinion]
20:28:59 <johnsom> I did not get a chance to test this to confirm if it is still an issue.  Anyone else?
20:29:18 <johnsom> Otherwise, you can slap my wrist and I will try again this coming week.
20:29:28 <dougwig> i still think the answer should be yes, but i believe this horse was beaten into glue.
20:29:37 <sbalukoff> I was almost entirely on internal stuff this last week. Didn't have time to review.
20:30:17 <blogan> how far down the making-cattle-into-pets logic do we go?
20:30:24 <johnsom> I kind of agree, rebootable is a good thing.  I want to test if this ssh issue still exists or not.  I think not, so we could kick that can
20:30:25 <blogan> i like pet cows
20:30:36 <blogan> they're delicious
20:30:39 <sbalukoff> Haha
20:30:41 <johnsom> Haha
20:30:58 <johnsom> #topic Load balancer upgrades (ihrachys)
20:31:01 <dougwig> my cat is 20. i can clearly never let go.
20:31:12 <johnsom> #link http://lists.openstack.org/pipermail/openstack-dev/2016-June/098471.html
20:31:19 <blogan> dougwig: how many times have you rebooted your cat?
20:31:23 <TrevorV> 5
20:31:27 <johnsom> I know he can't make it to the meeting, but he said he will read the notes.
20:31:28 <TrevorV> I heard him talk about it
20:31:52 <rm_work> yeah cattle may not be pets, but you still immunize them and put them inside during storms and such, right? >_>
20:32:03 <rm_work> because it'd be bad if your herd all froze and died
20:32:17 <sbalukoff> Oh, that's a long e-mail!
20:32:17 <johnsom> I think the one issue that comes to mind for me is the sticky table sync would be lost
20:32:18 <rm_work> to extend the metaphor a bit <_<
20:32:20 <dougwig> ihar's suggestion was to use FIPs as a zero-downtime upgrade mechanism.  it'd work, except for the many deployments that don't have FIPs.
20:32:29 <blogan> i got through half of ihar's email and then got pulled away
20:32:36 <johnsom> I still wonder about outage time with the FIP update as well
20:32:44 <rm_work> oh lol i'm on like a 2 minute delay because i'm scrolled up slightly >_<
20:32:44 <blogan> oh but fips are defocre required soon!
20:33:10 <sbalukoff> defacto?
20:33:15 <dougwig> my cloud doesn't use FIPs. it uses provider nets. i don't mind an optional mechanism on top of FIPs, but i'd hate to see something requiring them (and thus requiring NAT as well.)
20:33:25 <dougwig> i think he meant defcore.
20:33:27 <sbalukoff> I hate FIPs. They're not IPv6 friendly. :/
20:33:28 <blogan> dougwig: your cloud won't be an openstack cloud then, sorry you lose
20:33:30 <dougwig> not defcore optional?  wow, suck.
20:33:47 <johnsom> I am not a fan of FIPs either
20:33:55 <blogan> it'll be a year until that becomes required
20:34:00 <dougwig> neutron's deployment advice is heading away from FIPs.  and now we're requiring them?  when provider nets don't support them?  what a mess.
20:34:12 <sbalukoff> blogan: So a year to fight and delay that for another year...
20:34:15 <blogan> it's apparently the only way defcore thinks clouds should give out external IPs
20:34:34 <sbalukoff> Yeah, that's stupid.
20:34:42 <dougwig> FIPs should be available and used exceptionally, not the normal case.  IMO.
20:34:57 <sbalukoff> Because, again, can we please join the 2000's and *think* about IPv6 already?
20:34:58 <blogan> well i've successfully gotten us off topic, back to ihar's suggestion
20:35:16 <dougwig> sadly, whether we rely on FIPs goes to the heart of the suggestion.
20:35:30 <blogan> yeah true
20:35:31 <sbalukoff> I need to read through that e-mail.
20:35:34 <johnsom> sbalukoff IPv6 is broken in octavia btw.
20:35:38 <dougwig> as i was reading it, i was wondering if a port/vif swap couldn't achieve something similar?
20:35:42 <sbalukoff> johnsom: Quiet, you!
20:35:42 <blogan> sounds like a pluggable approach
20:36:02 <johnsom> It should be an easy fix
20:36:04 <sbalukoff> johnsom: Hopefully there's a bug open about this?
20:36:05 <blogan> dougwig: i was thinking of the same thing, kind of what we do on failover
20:36:09 <TrevorV> I don't think fips as the normal case is the right way either.  After mostly-skimming the email
20:36:10 <blogan> which is kind of what he was saying wouldn't work
20:36:20 <johnsom> Ok, so let's take some time to read through and reply to the e-mail.
20:36:32 <sbalukoff> dougwig: I'd prefer the port/vif swap if that's viable.
20:36:47 <sbalukoff> Ok.... so create an action item?
20:36:58 <johnsom> Sure
20:37:03 <blogan> sbalukoff: that's what failover does currently
20:37:14 <sbalukoff> Everyone go to the mailing list and bitch about how FIPs suck. Also, feedback on Ihar's suggestion.
20:37:19 <blogan> but yeah i guess everyone should read the email
20:37:27 <johnsom> #action everyone read over Ihar's e-mail about amp upgrades and reply to the e-mail
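For context while reading the thread, a rough sketch of the FIP cutover mechanism Ihar proposes, assuming python-neutronclient and made-up UUIDs; the port/vif swap dougwig and blogan mention would instead move the VIP port itself, closer to what failover does today.

```python
# Hypothetical sketch of a FIP-based cutover -- not Octavia code.
# Assumes python-neutronclient with Keystone v2 auth; all IDs and
# credentials are made up. Only applies to deployments that have FIPs.
from neutronclient.v2_0 import client

neutron = client.Client(username='admin', password='secret',
                        tenant_name='admin',
                        auth_url='http://controller:5000/v2.0')

FIP_ID = '11111111-2222-3333-4444-555555555555'        # floating IP (hypothetical)
NEW_VIP_PORT = '66666666-7777-8888-9999-000000000000'  # upgraded amp's VIP port

# Repoint the floating IP at the new amphora's VIP port; traffic cuts
# over once Neutron reprograms the NAT rule. Sticky-table state on the
# old amphora is lost, per the concern raised above.
neutron.update_floatingip(FIP_ID, {'floatingip': {'port_id': NEW_VIP_PORT}})
```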
20:37:55 <johnsom> #topic Open Discussion
20:37:56 <dougwig> let's get systemd to assign the FIPs, and make an audio representation with pulseaudio.  then we can all die in a vat of mediocrity.
20:38:07 <evgenyf> Hi guys, I have a patch that somehow breaks the Octavia driver here #link https://review.openstack.org/#/c/326018, could you please have a look?
20:38:10 <sbalukoff> dougwig: Haha
20:38:29 <evgenyf> #link https://review.openstack.org/#/c/326018
20:38:40 <sbalukoff> evgenyf: I'll try to get to that this week.
20:38:51 <dougwig> evgenyf: should we be amused that your own CI failed on it?
20:38:58 <sbalukoff> Heh!
20:39:06 <evgenyf> sbalukoff: thanks
20:39:49 <sbalukoff> Anything more on Ihar's suggestion?
20:39:55 <evgenyf> dougwig: our CI is broken, had no chance to fix it, hope to do it soon
20:40:20 <sbalukoff> Oh wait, sorry, didn't see the topic change.
20:40:50 <sbalukoff> Er... anyone have anything else they'd like to discuss at this time?
20:40:52 <johnsom> Yep, open discussion time
20:41:07 <nmagnezi> haproxy (with lbaas agent) related patches up for review:
20:41:09 <johnsom> sbalukoff Any update on the docs work?
20:41:09 <nmagnezi> #link https://review.openstack.org/#/c/327966/
20:41:16 <nmagnezi> #link https://review.openstack.org/#/c/299998/
20:41:22 <johnsom> I think we only saw one patch so far
20:41:30 <sbalukoff> johnsom: No progress in the last couple weeks. Been on internal stuff (and so has the docs team)
20:41:32 <evgenyf> Yes, a patch for the neutron flavors plugin
20:41:35 <blogan> oh btw i stand corrected about defcore requiring fips, it got pulled out
20:41:38 <evgenyf> #link https://review.openstack.org/#/c/318153
20:41:44 <blogan> got everyone all riled up for nothing
20:42:05 <rm_work> that's half the fun
20:42:14 <johnsom> blogan that figures....
20:42:16 <blogan> routers are still in though
20:42:20 <sbalukoff> johnsom: well, the good news is I'm getting a lot of experience on deploying Octavia in a non-devstack environment, meaning that doc at least ought to be fairly fleshed out. :)
20:42:26 * blogan hides in his hobbit hole
20:42:35 <johnsom> sbalukoff Awesome!
20:42:40 <TrevorV> blogan you know damn well you're too big for one of those
20:42:53 <dougwig> blogan: did systemd get ripped out in favor of init, too?  if so, i'll end today happy.
20:43:07 <sbalukoff> johnsom: But yes, once this internal stuff is done, I intend to spend some time on that, and then having docs people fix my broken English.
20:43:13 <blogan> dougwig: in some distros they never went to systemd, so you can be happy
20:43:22 <johnsom> Hahaha.  I will help too
20:43:23 <sbalukoff> And also herding that new IBM China team into doing a lot of upstream work, if I can. :)
20:43:41 <blogan> TrevorV: my species is Hobbitus Gigantis
20:44:00 <dougwig> at some point there will be a big pile of work in taking the neutron-lbaas tests and running them against the octavia api.  unless we've already duped them all.
20:44:12 <sbalukoff> We haven't.
20:44:20 <johnsom> Right
20:44:21 <dougwig> if you're looking for a big chunk of upstream that needs looking at, sbalukoff
20:44:34 <sbalukoff> dougwig: Right!
20:44:36 <blogan> dougwig: if we had the passthrough or proxy thing working then we'd buy some time on that
20:44:58 <sbalukoff> I'm fairly certain we'll need the passthrough proxy in any case.
20:45:09 <sbalukoff> Because LBaaSv2 and Octavia don't *exactly* line up.
20:45:13 <dougwig> blogan: you mean make octavia jobs that just use the neutron shim?  yes, but then we'd have pretty limited coverage on the far half of the "just switch endpoints" message.
20:45:24 <sbalukoff> Though they are damned close.
20:45:29 <blogan> sbalukoff: no, but one of the efforts is to get their APIs to line up
20:45:36 <sbalukoff> Yep, I know.
20:46:00 <dougwig> it'd be funny if we made the neutron api shim be haproxy...  in an amp!
20:46:07 <sbalukoff> HAHA
20:46:12 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1596639
20:46:12 <openstack> Launchpad bug 1596639 in octavia "Align Octavia API to Neutron-LBaaS APIs" [High,New]
20:46:39 <blogan> dougwig: true, in that case, it'd just be a matter of switching the tests to use a different service in the catalog
20:46:53 <nmagnezi> i have a question about octavia gates. how will a patch that changes the amphora agent get tested? isn't the amphora image deployed with octavia installed from pypi? (meaning master code)
20:46:59 <blogan> once octavia accepts true nlbaas calls
20:47:24 <dougwig> nmagnezi: no, devstack installs it from zuul, so via your git change.  and the image is built on the fly.
20:47:24 <rm_work> nmagnezi: no, we build it -- though there is a bug i think still where it builds from master and not the patchset specified?
20:47:38 <rm_work> ^^ did we fix the "from master" issue?
20:47:52 <rm_work> i was looking into it at some point and had some code that did that...
20:47:55 <johnsom> rm_work I was just trying to remember if we got that fixed or not
20:47:56 <dougwig> rm_work: really?  why aren't we using the octavia that's already *on disk* ?
20:48:05 <rm_work> dougwig: because we just aren't
20:48:06 <rm_work> sec
20:48:08 <rm_work> let me find my fix
20:48:35 <blogan> there's a file in tree that determines what version of the octavia code the amp image should have
20:48:47 <blogan> and since its in tree, its always pulling master
20:48:48 <rm_work> is there?
20:48:50 <rm_work> ah yes
20:48:56 <rm_work> https://review.openstack.org/#/c/317756/2/elements/amphora-agent-ubuntu/extra-data.d/50-copy_local_source_files
20:48:58 <rm_work> so
20:49:00 <rm_work> this "works"
20:49:08 <rm_work> but it was part of another effort that i abandoned
20:49:10 <rm_work> we could pull it out
20:49:25 <blogan> https://github.com/openstack/octavia/blob/master/elements/amphora-agent/source-repository-amphora-agent
20:49:38 <johnsom> There is a DIB override that should be used
20:49:43 <rm_work> yeah see this CR and ignore like half of it: https://review.openstack.org/#/c/317756/2
20:49:47 <dougwig> blogan, that git url breaks infra's CI rules, btw.
20:50:09 <blogan> dougwig: well thats good to know, a year later!
20:50:17 <sbalukoff> Heh!
20:50:23 <johnsom> Someone just put up a patch for that
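The override johnsom refers to: diskimage-builder's source-repositories element can be redirected per repo with DIB_REPOLOCATION_<name> / DIB_REPOREF_<name> environment variables, so a gate hook could point the amphora-agent element at the Zuul-staged tree instead of master. A hedged sketch (the paths and build-script location are assumptions):

```python
# Hypothetical gate-hook sketch -- not the fix under review.
# DIB_REPOLOCATION_<repo> / DIB_REPOREF_<repo> are standard
# diskimage-builder overrides; the paths below are assumptions.
import os
import subprocess

env = dict(os.environ)
env['DIB_REPOLOCATION_amphora_agent'] = '/opt/stack/new/octavia'  # Zuul checkout
env['DIB_REPOREF_amphora_agent'] = env.get('ZUUL_REF', 'HEAD')

# Build the amp image from the tree under review rather than master.
subprocess.check_call(['./diskimage-create/diskimage-create.sh'],
                      cwd='/opt/stack/new/octavia', env=env)
```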
20:50:51 <rm_work> but short answer is -- i fixed it, but then dropped that effort instead of getting it merged :/
20:51:14 <blogan> rm_work: dropped it bc it wasn't working out well? or time?
20:51:25 <rm_work> it was just a tiny piece of the amp-minifying effort
20:51:28 <rm_work> and i got retasked
20:51:44 <rm_work> THAT small part worked fine
20:51:52 <nmagnezi> dougwig, rm_work, thanks for the answer guys. yet i must admit that I didn't get how exactly the patch that should be tested ends up on the amphora disk
20:51:54 <rm_work> i can pull it out and resubmit
20:52:17 <rm_work> nmagnezi: yeah so what ends up happening is the amp agent changes get tested the next time AFTER your patch merges ;p
20:52:22 <rm_work> so, basically hilarity ensues
20:52:23 <blogan> nmagnezi: it doesn't, rm_work was dong a patch that would fix that issue
20:52:31 <sbalukoff> Sure. Probably worthwhile looking at the patch johnsom was just talking about, too.
20:52:50 <nmagnezi> oh, so this is why i didn't get it :D
20:53:04 <sbalukoff> Yeah, that's... less than ideal.
20:53:08 <bana_k> I don't understand much about DIB, but this might be of some help https://review.openstack.org/#/c/331392/
20:53:13 <blogan> nmagnezi: i think what some have done is update that file i linked above to their git review and push up a test change to see everything works fine, and then change it back and push up their review.  an awful workflow
20:53:43 <nmagnezi> blogan, wow..
20:53:51 <johnsom> Yeah.  I can take this as an action too
20:54:00 <rm_work> i'll see if i can get this done ASAP
20:54:07 <rm_work> probably tomorrow, as i have internal fires today
20:54:08 <sbalukoff> Well, in our defense, there aren't that many projects that use VM images like this. :/
20:54:19 <sbalukoff> As part of the service delivery, I mean.
20:54:21 <rm_work> since i already have working code for it
20:54:29 <rm_work> unless someone else wants to take it over
20:54:35 <blogan> the ssh driver never had this problem :)
20:54:39 <rm_work> but, examples are all there :P
20:54:43 <rm_work> blogan: +2
20:54:47 <dougwig> blogan: +3
20:54:48 * blogan watches the sshexit
20:54:50 <sbalukoff> Haha
20:55:00 * rm_work votes Yes for Sshexit
20:55:12 <rm_work> wait, or is it the other way around
20:55:16 <rm_work> I don't even
20:55:19 <johnsom> blogan -5
20:55:56 <rm_work> I don't understand what Sshexit is, but it has a catchy name so i'll vote for it anyway
20:56:00 <Frito> frito +100?
20:56:09 <sbalukoff> What else can we do to sabotage our own project?
20:56:16 <sbalukoff> HAHA
20:56:18 <johnsom> Ok, rm_work are you going to take this issue?
20:56:18 <dougwig> if i vote for sshexit, do we fork the project and dump all REST?
20:56:33 <rm_work> johnsom: yeah i'll try to tackle that today
20:56:36 <rm_work> or tomorrow
20:56:38 <rm_work> maybe today
20:56:39 <blogan> the project is pluggable, why fork
20:56:39 <johnsom> Ok
20:56:43 <dougwig> amusingly i've been doing a lot of flask lately, and like it much more than pecan.
20:56:52 <rm_work> dougwig: yes flask is much better
20:56:53 <rm_work> but alas
20:56:55 <johnsom> Any other topics before dougwig proposes a move to ruby AGAIN?
20:57:01 <blogan> dougwig: try falcon
20:57:05 <rm_work> note we did the agent in Flask :P
20:57:07 <sbalukoff> dougwig: Why have an API anyway, let's just spawn some kind of load balancer from a neutron agent.
20:57:08 <dougwig> blogan: why fork? because i don't trust this core team, of course. they're shifty.
20:57:08 <rm_work> dougwig: ^^
20:57:28 <rm_work> yeah I don't trust those cores either. not one bit.
20:57:30 <dougwig> sbalukoff: let's just build the lb into DVR ports.
20:57:48 <sbalukoff> dougwig: Exactly! I mean, it's a network function anyway, right?
20:57:48 <rm_work> (welcome to Octavia Open Discussion)
20:57:51 <blogan> have we digressed enough?
20:57:56 <johnsom> The thing I don't like about flask is the funky port stuff they do.  If it restarts, it abandons the TCP port in a bad way
20:57:59 <dougwig> blogan: can we vote on that?
20:58:10 <rm_work> johnsom: that's just if you're using their builtin server
20:58:18 <rm_work> johnsom: i run flask through gunicorn, it's just uswgi
20:58:24 <rm_work> *uwsgi
20:58:30 <blogan> ugwsi
20:58:36 <dougwig> so... you're saying then... it's all.... unicorns?
20:58:37 <rm_work> uwigw
20:58:38 <johnsom> Ah, good, it's a mess with what we have now
20:58:46 <sbalukoff> ...and rainbows.
20:58:46 <rm_work> johnsom: yeah werkzeug is shit
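A minimal sketch of the split rm_work describes, with a hypothetical module name: Flask supplies only the WSGI callable and gunicorn owns the listening socket, so werkzeug's dev-server port handling never enters the picture.

```python
# agent_app.py -- minimal sketch, not the real amphora agent.
# Flask only defines the WSGI app here; the werkzeug dev server is
# never started, so its TCP port handling is out of the picture.
from flask import Flask, jsonify

app = Flask(__name__)

@app.route('/info')
def info():
    return jsonify({'status': 'OK'})

# Run under gunicorn, which owns (and cleanly reuses) the TCP port:
#   gunicorn --bind 0.0.0.0:9443 agent_app:app
```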
20:58:48 <johnsom> Ok, two minutes
20:58:57 <dougwig> (gunicorn is a cheap knockoff of the ruby version. i'll just leave that out there.)
20:59:05 <johnsom> Ah, right it's werk
20:59:05 <sbalukoff> Haha
20:59:08 * blogan rolls eyes
20:59:20 <johnsom> Ok.  thanks folks!
20:59:22 <sbalukoff> Ok, y'all! Thanks for the meeting, eh!
20:59:24 <rm_work> johnsom: actually that'd be a fix....
20:59:31 <rm_work> johnsom: run gunicorn inside the amp lol
20:59:32 <blogan> we're at time
20:59:35 <blogan> thankfully
20:59:40 <sbalukoff> HAHA
20:59:41 <Frito> lol
20:59:45 <rm_work> o/
20:59:48 <dougwig> ruby fork ftw!
20:59:50 <johnsom> #endmeeting