20:00:05 <johnsom> #startmeeting Octavia
20:00:05 <openstack> Meeting started Wed Feb 17 20:00:05 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:06 <sbalukoff> Howdy, howdy!
20:00:08 <openstack> The meeting name has been set to 'octavia'
20:00:12 * mhayden stumbles in
20:00:13 <johnsom> Hi folks
20:00:20 <fnaval> hi
20:00:22 <blogan> hello
20:00:26 <blogan> \o/
20:00:27 <ajmiller> o/
20:00:29 <minwang2> o/
20:00:38 <johnsom> Let's roll as we have a long agenda today
20:00:39 <rm_work> o/
20:00:40 <johnsom> #topic Announcements
20:00:55 <johnsom> L7 stuff is under review
20:00:57 <johnsom> #link https://etherpad.openstack.org/p/lbaas-l7-todo-list
20:01:05 <blogan> and under assault
20:01:09 <johnsom> Please try it out and review the patches.
20:01:10 <sbalukoff> Haha
20:01:18 <madhu_ak> \o/
20:01:23 <dougwig> o/
20:01:24 <sbalukoff> Also, we need more attention on the Neutron-LBaaS side of things.
20:01:26 <johnsom> Well, yes, but that has happened to all of us, so I wasn't going to rub more salt.
20:01:38 <sbalukoff> We have a couple reviews there for shared pools that have sat without reviews all last week.
20:01:43 <blogan> i like salt
20:01:48 <johnsom> Today is the last day to vote for summit sessions
20:01:58 <johnsom> #link https://etherpad.openstack.org/p/Austin-LBaaS-talks
20:02:12 <TrevorV> o/
20:02:13 <Aish> o/
20:02:16 <johnsom> This link has the LBaaS related talks that I am aware of, add as needed
20:02:36 <bana_k> hi
20:02:40 <johnsom> #topic Mitaka blueprints/rfes/m-3 bugs for neutron-lbaas and octavia
20:02:58 <johnsom> dougwig Any items you want to hit before I bring up Octavia bugs?
20:03:09 <dougwig> johnsom: no, go for it
20:03:44 <johnsom> Ok, I went through the Octavia bugs yesterday and tagged the ones I think we should try to get done for Mitaka
20:03:46 <johnsom> #link https://bugs.launchpad.net/octavia/+bugs?field.tag=target-mitaka
20:03:52 <sbalukoff> Nice!
20:04:24 <johnsom> I'm open for comment on the list.  I'm extra open to people assigning these to themselves and working on them!
20:04:59 <sbalukoff> Ok, I'll start chewing through that list as I wait for feedback on the L7 stuff.
20:05:04 <ptoohill> I have a review out for 1489963 that i probably wont get to any time soon
20:05:22 <johnsom> I'll let folks have time to look at these and we can review next week what is open, comments, etc.  Or bring them up in the channel
20:05:23 <ptoohill> It requires restarting the service, which doesnt seem to work at all
20:05:54 <sbalukoff> johnsom: Sounds good.
20:06:08 <johnsom> Yeah, that service restart issue is on the list below as well.  xgerman is the current owner.
20:06:13 <johnsom> Would love an update
20:06:19 <ptoohill> :)
20:06:23 <xgerman> :-)
20:06:53 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1496628
20:06:53 <openstack> Launchpad bug 1496628 in octavia "Amphora agent reload fails with socket in use" [High,In progress] - Assigned to German Eichberger (german-eichberger)
20:07:03 <xgerman> yeah, need to find time to look.fix
20:07:25 <johnsom> Ok.  If you think you can work on it great, otherwise we should take you off as owner
20:07:34 <johnsom> #action xgerman to work on https://bugs.launchpad.net/octavia/+bug/1496628
20:08:01 <xgerman> I wouldn’t put myself into a critical path
20:08:07 <johnsom> #topic Tempest-Plugin for Octavia tempest tests
20:08:15 <johnsom> Madhu you have the floor
20:08:19 <ptoohill> if we figure that out then my review should start working. I added the config and code to bind to proper ip
20:08:25 <madhu_ak> would like to see tempest integration with octavia tempest tests. Currently in lbaas, the test discovery for tempest tests is hardcoded, which I feel is not the right thing to do. It can be implemented easily as per: http://docs.openstack.org/developer/tempest/plugin.html
20:08:44 <madhu_ak> I can push a dummy patch in octavia for tempest plugin. Depending on the patch, I can create a job for the same without touching gate hooks.
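For reference, the interface at the link above boils down to a setup.cfg entry point plus one small discovery class. A minimal sketch, assuming a hypothetical octavia.tests.tempest.plugin module path and entry-point name (the actual patch may choose differently):

    # setup.cfg would register the plugin under tempest's entry-point
    # namespace, e.g. (hypothetical names):
    # [entry_points]
    # tempest.test_plugins =
    #     octavia = octavia.tests.tempest.plugin:OctaviaTempestPlugin

    import os

    from tempest.test_discover import plugins


    class OctaviaTempestPlugin(plugins.TempestPlugin):
        def load_tests(self):
            # Return (full path to the test dir, top-level module dir) so
            # tempest's discovery finds the tests without gate-hook paths.
            base_path = os.path.split(os.path.dirname(
                os.path.abspath(__file__)))[0]
            full_test_dir = os.path.join(base_path, 'tests/tempest')
            return full_test_dir, base_path

        def register_opts(self, conf):
            # No plugin-specific config options in this sketch.
            pass

        def get_opt_lists(self):
            return []

With the entry point installed, tempest discovers the octavia tests on its own, and the gate hook shrinks to invoking tempest with a test regex rather than a hardcoded path.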
20:09:12 <TrevorV> madhu_ak have you talked with fnaval about this at all?
20:09:18 <madhu_ak> fnaval: Does that sound good to you?
20:09:48 <madhu_ak> nope TrevorV maybe we need to talk about this and keep going ?
20:10:03 <blogan> using the plugin sounds like a good idea overall
20:10:05 <fnaval> i've barely seen how the plugin works - do other projects use this as well
20:10:22 <fnaval> i'm worried that we may be the early adopters of it, and thus have issues with it
20:10:23 <blogan> i dont know if there are any issues with it though
20:10:36 <johnsom> We can spend a little time here if everyone is present.  I know we talked about this at the mid-cycle and we need to keep moving on tests for Octavia
20:10:48 <madhu_ak> yes. For VPN, it is in review. for FW, it is already implemented.
20:10:59 <dougwig> did the lbaas tests ever get converted to being a tempest plugin?
20:11:06 <sbalukoff> Agreed-- I am looking forward to having real scenario tests for Octavia. :)
20:11:12 <madhu_ak> sadly no
20:11:52 <fnaval> the neutron-lbaas tests have not become a plugin; I was unaware that it was a requirement
20:12:36 <johnsom> dougwig Is this a requirement or nice to have?  Is it something that is high priority?
20:12:46 <TrevorV> madhu_ak I think I'm missing something.  fnaval has a review with octavia tempest tests already, is there some reason those are insufficient?
20:12:48 <fnaval> we have a scenario test in review that does not implement the plugin interface; but we can definitely look into that as a later refactor?
20:13:11 <madhu_ak> yep. the current tempest tests for octavia will not be disturbed at all.
20:13:31 <sbalukoff> Would using the plugin mean we have to copy less of the tempest tree into Octavia?
20:13:45 <sbalukoff> (ie. helps us avoid code that will surely become crufty in Octavia?)
20:13:51 <madhu_ak> it is just for running the tests as part of the gate job
20:14:46 <dougwig> johnsom: tempest team is waiting on us to undo the mess of cloned tempest, but that mostly affects us being broken.
20:14:50 <TrevorV> Okay, what's the idea here.  We don't want tempest testing in tree?  Is that what this is supposed to fix?
20:15:11 <fnaval> the tests should still be in-tree; what i'm missing is what does the plugin accomplish?  i guess if we were testing internal implementations, it would be easy to plug in different test cases
20:15:20 <fnaval> since upstream would differ from down
20:15:34 <rm_work> it's for test discovery
20:15:38 <madhu_ak> nope. We can have tempest tests in our tree. rather than hardcoding the test path in gate hooks, it's best to have a tempest plugin that will discover the tests path automatically
20:15:38 <rm_work> i mentioned this at the midcycle
20:15:43 <dougwig> fnaval: we currently have half of tempest cloned into neutron-lbaas, interfacing with tempest-lib.  it's very brittle.
20:16:15 <johnsom> dougwig +1
20:16:36 <fnaval> so, the octavia tests are using tempest_lib; as I understand it now, there is a request to also use tempest plugins
20:16:41 <minwang2> the current octavia tempest test is more like a combination of tempest and tempest-lib
20:16:52 <xgerman> +1
20:17:04 <rm_work> having the plugin stuff done is a good idea IMO
20:17:14 <ptoohill> fnaval: has put quite a bit of work in from where min left off
20:17:21 <ptoohill> im not sure thats the case anymore minwang2
20:17:23 <sbalukoff> Yeah.
20:17:24 <rm_work> i just wasn't sure of the work effort involved -- if madhu_ak knows how to do it, and can easily do so, i say go for it
20:17:40 <dougwig> my latest understanding is that tempest-lib is going away in favor of 'tempest' being in pypi, btw.  so plugins are the future.
20:17:43 <rm_work> we'll merge what makes sense
20:17:55 <madhu_ak> yep. lets go for it.
20:17:58 <xgerman> +1
20:18:02 <TrevorV> rm_work I'm not sure what we're talking about changing... Is it like adding a line above each test or a header in each file that allows tempest to magically pick up what tests its supposed to run?
20:18:08 <fnaval> if you can do it with neutron-lbaas or direct me to an example implementation using Tempest plugins with FwaaS, I can research it
20:18:08 <blogan> dougwig: but tempest-lib was the future
20:18:11 <fnaval> madhu_ak:
20:18:15 <rm_work> it's kinda like how devstack-plugin works
20:18:21 <blogan> dougwig: who's to say tempest in pypi isn't an alternate future as well
20:18:25 <rm_work> it just changes how the gate scripts are written to simplify stuff
20:18:28 <dougwig> it's openstack, the future changes before it arrives.
20:18:37 <xgerman> yeah, there is a reason I was advocating rally
20:18:44 <ptoohill> lol
20:18:48 <rm_work> >_<
20:18:52 <madhu_ak> sure, fnaval I shall post some links to keep moving forward
20:19:06 <johnsom> I am just looking for the alternate future where we have good scenario test coverage and it's clear for people how to write new tests.
20:19:14 <fnaval> madhu_ak: thanks. any help is appreciated
20:19:15 <xgerman> +1
20:19:26 <blogan> johnsom: that is the future that shall not exist
20:19:26 * johnsom throws rally eraser at xgerman
20:19:28 <minwang2> tempest-lib mostly overrides a lot of the methods in tempest; in octavia some of the methods cannot be called directly from tempest-lib, that's why we see the combination of tempest and tempest-lib
20:19:49 <sbalukoff> haha
20:19:53 <fnaval> what is up for review totally works now, but if we have the time to refactor it again, I can definitely do that.  but I strongly think it should be a future task.
20:20:05 <madhu_ak> +1 fnaval
20:20:19 <sbalukoff> That sounds reasonable to me.
20:20:30 <blogan> yeah
20:20:37 <madhu_ak> however the tempest plugin implementation will in no way affect fnaval's patch
20:20:48 <johnsom> Ok, so the plan for Mitaka is move forward with the work fnaval has been doing (Thank you!) and for Newton look at tempest plugin?
20:20:57 <fnaval> cool madhu_ak
20:21:05 <dougwig> this is openstack, we should -2 it because someone might someday do it differently.
20:21:14 <ptoohill> +1
20:21:28 <sbalukoff> Haha!
20:22:00 <johnsom> dougwig Let's vote on the -2
20:22:01 <rm_work> johnsom: it's not really a big deal, we can get it in now probably if it's ready
20:22:09 <rm_work> it doesn't affect the actual test code
20:22:17 <madhu_ak> +1 rm_work
20:22:23 <rm_work> they're ... really fairly unrelated as much as that's possible
20:22:25 <johnsom> Ok, just trying to understand what was decided.  So, parallel development?
20:22:34 <johnsom> Ok, got it.
20:22:53 <xgerman> who has the action?
20:22:55 <minwang2> how would it be best for us to review this, and how much work is the plugin, madhu_ak?
20:23:16 <johnsom> #agreed Move forward with current scenario tests, in parallel madhu_ak will implement tempest plugin for Octavia
20:23:18 <madhu_ak> plugin implementation can take a day or two to push a patch
20:23:27 <madhu_ak> agreed
20:23:38 <minwang2> cool
20:23:38 <xgerman> #action made_ak add plugin code
20:23:41 <johnsom> Nice.
20:23:50 <fnaval> cool - please involve me with that madhu_ak as I want to know more about how that works
20:23:56 <TrevorV> xgerman misspelling names since 2014
20:24:03 <johnsom> #topic Octavia models
20:24:03 <madhu_ak> sure fnaval
20:24:07 <sbalukoff> Haha!
20:24:09 <fnaval> thanks madhu_ak
20:24:11 <xgerman> lol
20:24:11 <madhu_ak> heh
20:24:11 <johnsom> Ok, I will start, RED
20:24:39 <johnsom> No, just kidding.  So we had a loooooonnnnggg discussion about the models Friday.  Monday I think they got fixed.
20:24:48 <johnsom> Is there more we need to discuss here?
20:24:59 <sbalukoff> Not until after Mitaka, I think.
20:25:07 <rm_work> assuming the fix works :)
20:25:12 <sbalukoff> So long as L7 doesn't get blocked because of it. ;)
20:25:12 <blogan> maybe discuss the possibility of moving away from the static data models, to just using the SA models throughout
20:25:19 <xgerman> can we get some ERD diagram
20:25:20 <xgerman> ?
20:25:23 <johnsom> Ok, yeah, there was some talk of moving to sqlalchemy models.  Definitely post Mitaka
20:25:28 <blogan> yep
20:25:43 <dougwig> we should vote on when to discuss votig.
20:25:50 <sbalukoff> Haha
20:25:57 <johnsom> dougwig don't tempt me
20:26:09 <xgerman> well, I like to have some more architecture docs —
20:26:30 <johnsom> Ok, so we are cool on the models and order for the Mitaka release.  My testing so far today looks good.
20:26:58 <johnsom> Probably a few more little bugs (need to check the peer ports bana_k mentioned), but we should be able to fix those.
20:26:59 <sbalukoff> Yay!
20:27:24 <johnsom> #topic Ideas for other options for the single management network (blogan)
20:27:35 <johnsom> blogan You have the floor
20:27:54 <dougwig> ssh driver!
20:28:03 <sbalukoff> dougwig: Almost dead!
20:28:07 <xgerman> is dougwig a bot?
20:28:10 <johnsom> dougwig It's dead!!!!  sbalukoff is an over-achiever
20:28:24 <dougwig> dougwig got up early today, so he closely approximates a bot, but less useful.
20:28:34 * mhayden finds this topic intriguing ;)
20:28:35 <johnsom> Well, I haven't reviewed yet, but it warmed my heart to see the patch
20:28:51 <blogan> oh god sorry
20:28:53 <blogan> im back
20:28:56 <sbalukoff> johnsom: I'm probably missing something important, because it was way too easy to kill. :/
20:29:13 * dougwig mutters about over-complicated over-engineering, and picks up his red stapler.
20:29:18 <johnsom> blogan You have the floor for your mgmt-net topic
20:29:21 <Apsu> No more ssh driver makes me happy after seeing how it did its business...
20:29:32 <blogan> so originally the single mgmt net was a solution that either meant deployers use a provider network for it, or we come up with something better in the future
20:29:43 <bharathm> sbalukoff: I commented on your ssh-driver removal patch
20:29:45 <blogan> well we should start coming up with an optional better way to do it
20:29:56 <sbalukoff> blogan: Agreed!
20:30:45 * rm_work still likes the ssh-driver <_<
20:30:53 <blogan> one way is to just create a mgmt net for every lb and connect all controllers to this mgmt net; that has scaling issues but can be solved with more complicated clustering
20:30:56 * sbalukoff beats rm_work with a stick.
20:31:21 <sbalukoff> blogan: What's the problem we're trying to solve by making that change?
20:31:28 <johnsom> The idea I have had is to have one or more controller networks, that are routable with many amphora mgmt networks, likely managed by housekeeping.
20:31:34 <mhayden> blogan: makes me wonder if we could use IPv6 to help with the scalability somehow
20:31:44 <blogan> another is what johnsom suggested yesterday, in that we have a single controller network that every controller is plugged into, a router is also plugged into it, and each lb has its own network plugged into the router
20:31:51 <johnsom> mhayden Yes, IPv6 is good
20:31:52 <sbalukoff> mhayden: +1
20:31:58 <xgerman> +1
20:32:02 <mhayden> a /64 subnet would serve a LOT of LB's :)
20:32:19 * mhayden skips the math
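(The skipped math: a /64 leaves 64 bits of host space, i.e. 2^64 = 18,446,744,073,709,551,616 addresses, so the subnet itself is never the bottleneck; the per-network port limits raised just below are.)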
20:32:19 <xgerman> well, I think we just support a list of management networks and leave it to the operator to set that up
20:32:23 <johnsom> blogan: per LB, per tenant, or per some number of amphorae
20:32:32 <blogan> mhayden: the scale problem for option 1 is the number of ports is multiplied by the number of controllers, give or take a few
20:32:37 <sbalukoff> I like johnsom's idea. Means the controller doesn't have to re-bind when a new loadbalancer is deployed.
20:32:53 <mhayden> blogan: ah, do we have a neutron limitation when we have too many ports on a particular subnet/network?
20:33:07 <blogan> mhayden: in our public cloud yes :)
20:33:08 <xgerman> I think we should leave it to the operator and just support a way for them to tell us where to plug
20:33:09 * johnsom notes sbalukoff likes my idea in the history book
20:33:15 <blogan> we have a limit per network, and a global limit
20:33:15 <sbalukoff> Haha!
20:33:31 <mhayden> what if an LB only had connectivity in the tenant's network?
20:33:37 <Apsu> Seems like I'll be in the minority, but I'd prefer to not have a management "network" at all. Management in Neutron across basically every other service is out-of-band, on purpose.
20:33:40 <mhayden> we'd have issues reaching out to it from wherever octavia is running (possibly)
20:33:48 <johnsom> Some folks have mentioned number as low as ~250 ports per network.
20:34:19 <TrevorV> Apsu how would you connect to a LB for maintenance tasks or something similar?  Over the customer network?
20:34:20 <blogan> Apsu: yeah another option is realize a way to meet our requirements without a mgmt network, if its possible
20:34:51 <xgerman> well, can we agree that having only one management net is limiting?
20:34:54 <mhayden> perhaps octavia could have an agent/worker of some sort that sits on the tenant network? as routers do today
20:35:06 <blogan> xgerman: a pool of mgmt networks, how would that be different than a mgmt network for each lb
20:35:06 <Apsu> TrevorV: Well there's a few options for getting data in/out without having to couple the data and control plane networks. Metadata, mountpoints, agent/worker on tenant (per mhayden), etc.
20:35:07 <sbalukoff> xgerman: +1
20:35:17 <blogan> xgerman: yes we can if its not a provider network
20:35:57 <mhayden> if we put the LB *only* on the tenant network, we would be eating additional IP addresses in the tenant network to support the LB, which is annoying :/
20:36:15 <johnsom> So, this is a Newton project.  How do you all want to start working on it?  An etherpad for ideas?
20:36:23 <TrevorV> mhayden isn't that also the case with an agent/worker on their network?
20:36:29 <sbalukoff> mhayden: LB in the tenant network probably wouldn't happen with active-active.
20:36:30 <Apsu> mhayden: Yep. Another reason I don't like having a network for management whatsoever, insofar as "network" means "something neutron/nova configure out of the cloud's resources"
20:36:35 <mhayden> TrevorV: indeed it is :/
20:36:37 <xgerman> johnsom sounds good
20:37:03 <rm_work> we had discussed putting an agent on the *VM host*, don't know where that went or if it's feasible in real deployments (might not be)
20:37:04 <sbalukoff> Apsu: i don't see that as a show-stopper. We're consuming resources on the cloud by launching amphorae.
20:37:09 <mhayden> or another idea is to build a couple of LB VM's per tenant and put containers/namespaces in that VM
20:37:17 <mhayden> but we could be putting a lot of eggs into one basket
20:37:26 <rm_work> yeah, not a huge fan of that
20:37:29 <TrevorV> Same
20:37:40 <sbalukoff> Yep.
20:37:42 <Apsu> sbalukoff: Sure. Personally I'd prefer to see a namespace driver, but that's probably also a minority opinion.
20:37:55 <Apsu> On the plus side, it gives you the OOB control for free, given shared process namespace.
20:37:55 <sbalukoff> Apsu: Namespace driver doesn't scale.
20:38:05 <TrevorV> rm_work the agent per VM host still communicated over a network we built for the Octavia service, which still counts as a "management network" of sorts
20:38:08 <sbalukoff> The whole *point* of Octavia is scale.
20:38:45 <Apsu> sbalukoff: Well, it's not as automatic to scale if you equate namespaces with "don't run on compute nodes", sure
20:38:52 <dougwig> i don't think we should spend any more time on the namespace driver. it's not the ref. if someone wants to take it on, fine.
20:38:59 <Apsu> But nothing prevents you from using network namespaces on compute nodes, without actually running in VMs.
20:39:03 <xgerman> dougwig +1
20:39:18 <johnsom> Ok, let's collect ideas in an etherpad
20:39:22 <johnsom> #link https://etherpad.openstack.org/p/Octavia_Mgmt_Net_Ideas
20:39:45 <sbalukoff> Apsu: I think that's really far from the direction we were planning on taking with Octavia. Far enough to be its own load balancer project, perhaps.
20:39:55 <mhayden> could we formally deprecate the namespace driver in mitaka/newton for lbaasv2?
20:40:14 <xgerman> that has been controversial since it has its fans
20:40:27 <johnsom> mhayden It would probably need to be O since it is used
20:40:28 <Apsu> sbalukoff: Fair enough. I'll leave that alone. I still think OOB comms is a worthwhile pursuit
20:40:35 <sbalukoff> mhayden: I would like to; I think we need to make sure that Octavia deployment is simpler and performance is definitely at or better than namespace to do that.
20:40:53 <dougwig> i'd vote to deprecate and let someone maintain in a separate repo if they want.
20:41:02 <xgerman> dougwig +1
20:41:05 <sbalukoff> dougwig: +1
20:41:20 <xgerman> so #startvote
20:41:28 <blogan> i still kind of like having it as a simpler driver
20:41:31 <mhayden> i'm happy to help with some octavia docs, but i'll need some help on that to get the concepts right
20:41:42 <johnsom> Sounds like another agenda topic to cover at the next meeting?  I would like to advertise that one before a vote.
20:41:49 <sbalukoff> mhayden: I'll do my best to help you there.
20:41:50 <xgerman> ha
20:41:51 <sbalukoff> Please ping me.
20:41:52 <blogan> with new features, right now we have to implement it in octavia and neutron-lbaas, which sucks
20:42:04 <johnsom> #topic Do we need an admin tenant? (blogan)
20:42:35 <johnsom> Please update the etherpad with mgmt-net ideas.  Moving on to blogan's next grenade
20:42:49 <sbalukoff> Haha! Ok.
20:42:51 <blogan> yeah so neutron has rbac now, which allows a user to say this tenant can create ports on my network
20:42:53 <blogan> basically
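For context, this is the neutron RBAC mechanism added in Liberty: the owner of a network (or the deployer) grants another tenant access with an rbac-create call, roughly like the following (the octavia service tenant and network IDs are placeholders):

    # let the octavia service tenant create ports on a network it does
    # not own, instead of requiring full admin credentials
    neutron rbac-create --target-tenant OCTAVIA_TENANT_ID \
        --action access_as_shared --type network MGMT_NET_ID

The open question in the discussion that follows is whether setting that policy itself still needs an admin account.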
20:42:57 <mhayden> to be fair, i pushed blogan to throw these grenades :P
20:43:02 * blogan throws grenade at johnsom
20:43:38 <blogan> mhayden: well they're good topics anyway, the mgmt net needs improving, and if people are averse to having an admin account then it should be noted at least
20:43:49 <johnsom> Sounds like the ACL fun we are having with barbican
20:44:08 <sbalukoff> Yep.
20:44:16 <blogan> johnsom: yep, however, we could just use an admin account to only set the rbac policy to allow a normal octavia user to do it
20:44:21 <xgerman> Yeah, BBQ is a time sink for me
20:44:27 <rm_work> yeah I don't think ACLs are working properly in consumers still
20:44:28 <blogan> so it lessens the admin calls, but still would require an admin account
20:44:38 <dougwig> blogan: we don't gate on it or test it *at all*.  that sounds deprecated to me.
20:44:48 <rm_work> someone needs to poke at some barbican devs, i haven't had time/priority to look at it
20:44:54 <rm_work> i had a patch up a while back
20:45:08 * blogan hops in his time machine to reply to dougwig's comment
20:45:28 <xgerman> then there are nova quotas which require an admin account to be changed
20:45:58 <blogan> xgerman: true, so there's probably a lot of little gotchas like that
20:46:05 <xgerman> yep
20:46:11 <blogan> but in that case, that should be the deployer who ups those quotas for the octavia user
20:46:21 <xgerman> and this is where I have spend some of my time recently ;-)
20:46:32 <blogan> but octavia will automatically do security group rules, which may require admin access, i honestly can't remember
20:46:43 <xgerman> yep, we do
20:46:52 <johnsom> blogan If we still need an admin account, what is it really buying us?
20:46:57 <TrevorV> Okay so the answer is, "yes we need an admin tenant"
20:46:59 <TrevorV> Is that it?
20:47:17 <xgerman> well, you could make roles in keystone
20:47:25 <xgerman> there is admin stuff we don’t need
20:47:33 <blogan> johnsom: the point of this discussion is that if there's a way to not have an admin account we should use it; rbac seemed like a possibility but it sounds like there are a lot of gotchas on it
20:47:42 <bharathm> We can have a different tenant, but with an admin role to do all the above
20:48:03 <Apsu> Role membership seems like a reasonable middle ground. Least privilege required and such
20:48:11 <sbalukoff> Is this because permissions on "admin" stuff aren't granular enough?
20:48:54 <blogan> could be, and if thats the case then the roles and policy can be done by deployers
20:49:05 <blogan> and this is a moot subject
20:49:15 <blogan> if thats the solution
20:49:15 <TrevorV> A better question for me is, why don't we want admin tenant if we can use it?
20:49:32 <xgerman> if octavia gets compromised...
20:49:48 <blogan> the more admin accounts, the greater the attack vectors
20:49:50 <sbalukoff> TrevorV: "Least privilege" is a basic security requirement in most places.
20:49:52 <ptoohill> disable octavia service account?
20:49:52 <Apsu> ^
20:50:30 <johnsom> blogan Newton time frame?
20:50:48 <sbalukoff> johnsom: +1
20:50:51 <blogan> johnsom: or its a deployer problem
20:50:53 <johnsom> or is this something that is more urgent for folks?
20:50:58 <blogan> johnsom: but yeah never intended to be for Mitaka
20:51:04 <johnsom> Ok
20:51:09 <blogan> would require too much work
20:51:42 <xgerman> deployer problem
20:51:46 <johnsom> I think the unique tenant/account with roles might be a good way to go.  I'm not sure about the RBAC stuff as I haven't really looked at it.
20:52:15 <johnsom> #topic LBaaS tests with OVN in the gate?
20:52:27 <johnsom> Someone added this one to the agenda
20:53:11 <johnsom> Anyone claim it?  dougwig?  You seem to like more gates....
20:53:36 <dougwig> any context on this topic?
20:53:54 <johnsom> Just the line someone added to the agenda.  First I have heard of it
20:53:55 <blogan> i have no experience with OVN mitts
20:54:14 <sbalukoff> Huh.
20:54:26 * mhayden gets it
20:54:29 <blogan> lol
20:54:31 <blogan> ovn = oven
20:54:35 <johnsom> Ok, if nobody claims it, I will declare it's a drive-by OVN
20:54:40 <sbalukoff> Haha!
20:54:41 * mhayden resists posting ascii-art
20:54:48 <Apsu> Drive-by baking
20:54:49 <johnsom> #topic Open Discussion
20:55:15 <sbalukoff> xgerman: What was the name of that distro to look into to try to cut down on amphora image size?
20:55:17 <sbalukoff> Alpine?
20:55:24 <sbalukoff> What's its license?
20:55:26 <xgerman> yep, alpine
20:55:41 <mhayden> could always go with something like RancherOS and containerize ;)
20:55:42 <Apsu> sbalukoff: What distro is it based on currently?
20:55:46 <xgerman> #link http://www.alpinelinux.org
20:55:48 <sbalukoff> Apsu: Ubuntu.
20:55:51 <sbalukoff> Ok.
20:55:54 <sbalukoff> I'll have a look.
20:55:55 <Apsu> sbalukoff: Have you seen Ubuntu Core?
20:56:03 <xgerman> yeah, it’s on my list
20:56:07 <mhayden> or CoreOS? :)
20:56:07 <xgerman> alpine
20:56:17 <sbalukoff> mhayden: Eventually we do want containers. But it sounds like that's going to take some work.
20:56:22 <blogan> could go with red star os too
20:56:30 <johnsom> Or clearOS
20:56:45 <sbalukoff> I'd just like to get to a VM that boots a lot quicker even with vmx.
20:56:55 <xgerman> well, one negative from the last lab was memory consumption — I hope alpine can help
20:56:59 <Apsu> sbalukoff: Would probably be the least movement from the current image, and their goal is tiny footprint. It's the basis for the new transactional pkg mgmt they're working on, too
20:57:00 <sbalukoff> Yep.
20:57:08 <Apsu> But seems like it might be a good candidate
20:57:23 <sbalukoff> Apsu: Thanks for the recommendation, eh!
20:57:29 <johnsom> Oh, I guess it's clear linux, not clearos.  The distro Intel was pitching in Vancouver
20:57:32 <Apsu> https://wiki.ubuntu.com/Core yep
20:57:45 <xgerman> yeah, I wish that diskimage-builder had some good tiny ones built in
20:58:03 <sbalukoff> Also: does anyone here know enough about virtualization tech to know whether using a 32-bit image would actually gain us anything?
20:58:15 <sbalukoff> (As far as resource footprint)
20:58:23 <johnsom> I tried the ubuntu core DIB element about six months ago.  It was broken.
20:58:43 <sbalukoff> (since I think it's unlikely we'll need to support >4GB haproxy processes for the forseeable future...)
20:59:02 <sbalukoff> johnsom: aah, good to know.
20:59:03 <johnsom> sbalukoff 32bit would get us a smaller on disk footprint.
20:59:18 <sbalukoff> johnsom: But on a 64-bit host, doesn't save us ram?
20:59:29 <Apsu> It wouldn't on modern processors, tmk. The virt engines actually work harder to run 32-bit instructions because (afaik) they use 64-bit for all memory addressing regardless of instruction size and class.
20:59:39 <sbalukoff> Ok.
20:59:45 <sbalukoff> So, don't bother with 32-bit. :/
20:59:48 <Apsu> So it's a conversion on 32-bit instructions. I could be wrong, but that's how I understand it.
20:59:50 <johnsom> sbalukoff probably.
21:00:10 <sbalukoff> I'll ping Dustin, as he keeps up on this far better than I...
21:00:14 <johnsom> Ubuntu core could be worth looking at it again.  It just wasn't working last time I tried it.
21:00:33 <sbalukoff> johnsom: Ok!
21:00:56 <Apsu> I suspect talking with Canonical and mentioning the intended use of Core would probably get some hands to help make it right from their side, if it's broken
21:01:02 * Apsu shrugs
21:01:10 <johnsom> I don't see a DIB element for alpine yet, so might be some work.
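For anyone picking that up: diskimage-builder selects the base distro via its element list, so alpine support would mean writing a new element. The closest stock path today is the minimal elements, along the lines of the following sketch (the output name is just an example):

    # hypothetical invocation; an alpine element would slot in where
    # ubuntu-minimal sits today
    disk-image-create -o amphora-minimal ubuntu-minimal vm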
21:01:15 <johnsom> Ok, that is time.
21:01:18 <sbalukoff> Yeah.
21:01:19 <johnsom> #endmeeting