20:00:05 <johnsom> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Mar  1 20:00:05 2017 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:10 <openstack> The meeting name has been set to 'octavia'
20:00:11 <xgerman> o/
20:00:33 <diltram> o/
20:00:37 <johnsom> Hi folks
20:01:21 <nmagnezi> o/
20:01:40 <johnsom> #topic Announcements
20:01:52 <johnsom> The first OpenStack PTG was last week
20:02:28 <johnsom> We had pretty good representation for Octavia with five regular team members present
20:02:46 <xgerman> +1
20:03:09 <johnsom> I tried to keep notes on the etherpad as I attended meetings or we discussed items from the etherpad
20:03:17 <johnsom> #link https://etherpad.openstack.org/p/octavia-ptg-pike
20:03:27 <johnsom> I might clean that up a bit if I get time later today.
20:03:39 <johnsom> Highlights...
20:04:07 <johnsom> We met with barbican and it seems like the cascading ACLs can happen.  I opened a bug to track it
20:04:22 <johnsom> #link https://bugs.launchpad.net/barbican/+bug/1666963
20:04:22 <openstack> Launchpad bug 1666963 in Barbican "Enable cascading ACLs based on container ID" [Undecided,New]
20:04:45 <johnsom> Lots of folks are interested in using octavia in container environments
20:04:57 <johnsom> We also had a lot of interest in amphora in containers
20:05:36 <johnsom> We had a good discussion about the state of testing in octavia and a path forward there
20:06:04 <johnsom> We also highlighted some much needed documentation and put names to those
20:06:34 <johnsom> I want to discuss the OpenStack client part later in the agenda so I will wait on that.
20:06:42 <johnsom> Any questions about the PTG or the notes?
20:07:04 <nmagnezi> not at the moment
20:07:10 <nmagnezi> maybe after your cleanup :)
20:07:19 <johnsom> Hahaha, yeah, it got messy
20:07:25 <nmagnezi> indeed :D
20:07:31 <johnsom> So many discussions going on....
20:07:46 <nmagnezi> yeah.. sorry i couldn't be there
20:08:31 <m-greene> py35 is a pike goal, or queens?
20:08:40 <diltram> pike
20:08:47 <johnsom> Other announcements, I have put in a request for new git repositories: octavia-dashboard (for renaming and migrating the dashboard), python-octaviaclient (OSC plugin), and octavia-tempest-plugin (tempest plugin)
20:08:50 <m-greene> I didn’t see that in the notes, though we discussed
20:09:13 <johnsom> #link https://governance.openstack.org/tc/goals/pike/index.html
20:09:13 <diltram> johnsom: how about the namespace driver
20:09:17 <diltram> ?
20:09:23 <m-greene> thx
20:10:06 <johnsom> I have also put in to migrate octavia over to cycles-with-milestones release cycles.  This will make i18n, packaging, and some of our end of cycle steps easier.
20:10:55 <johnsom> diltram Good point, I will need to do that as well
20:11:26 <diltram> and how about this dashboard?
20:11:32 * johnsom thinks he really should clean up the notes....
20:11:32 <diltram> we need to move the code on our own?
20:11:45 <nmagnezi> johnsom, if I may expand on diltram's first question: what about 3rd party drivers support in general?
20:11:47 <johnsom> diltram Yes, I will take care of that
20:12:22 <diltram> nmagnezi: we gonna support drivers
20:12:49 <johnsom> nmagnezi We did talk about that.  It is an open rfe/bug for the lbaas-merge work.  We just need to get the base API merged before that task can start.
20:12:52 <diltram> probably I will be responsible for delivering that drivers API
20:13:13 <nmagnezi> got it. thanks :)
20:13:45 <johnsom> One item that did come out of the PTG for the drivers is we will be adding an endpoint to the health manager to allow the drivers to submit status and stats.  This should allow for good scalability
20:15:01 <johnsom> Ok, let's move on
20:15:12 <johnsom> #topic Brief progress reports / bugs needing review
20:15:32 <johnsom> I think we have cleared up the two-three gate issues that were bugging us.
20:16:03 <johnsom> It sounds like the qa folks are starting to work on the devstack issue again, so maybe we can pull out that workaround soon.
20:16:59 <johnsom> I am also continuing to work on our API-REF docs
20:17:18 <johnsom> Any other notable progress?
20:17:35 <nmagnezi> is that the time to mention new bugs?
20:17:57 <johnsom> Sure, if there are bugs you would like to bring attention to, please do
20:18:04 <nmagnezi> yup
20:18:18 <nmagnezi> just one bug i have found today
20:18:19 <nmagnezi> https://bugs.launchpad.net/octavia/+bug/1669019
20:18:19 <openstack> Launchpad bug 1669019 in octavia "The gates are not testing the latest amphora-agent code" [Undecided,New]
20:18:50 <nmagnezi> i basically gave as many details as i could. and IMHO it is important to resolve this
20:18:52 <johnsom> I saw that this morning, but haven't yet looked deeper into it
20:19:03 <nmagnezi> if we can agree on "how" I can submit the patch
20:19:35 <nmagnezi> in short, the agent that is being run inside the amp instance does not include the patch that should be tested
20:19:41 <johnsom> nmagnezi We did get switched over to Python3.5 in the amphora due to the DIB changes in Ocata
20:20:10 <nmagnezi> ah. ok so that is expected. just brought it up because i wasn't sure
20:20:18 <nmagnezi> so good to know.
20:21:14 <diltram> can we support one more parameter to decide how we're gonna install amphora-agent code?
20:21:17 <johnsom> Yeah, we were not expecting it, but it happened, so we adapted.  You will see we have a number of py3x gates now too (more needed actually).  Pike has a goal for full py3x testing and support.
20:22:17 <nmagnezi> johnsom, devstack is going to use python3 as well? (when it is starting the openstack services)
20:22:26 <johnsom> We do override the amphora-agent install to pickup the checked out version.  It's a bit strange to follow.  The element says master, but in reality for the devstack, we override it to the current patch
20:23:34 <johnsom> nmagnezi yes, there is the USE_PYTHON3=True setting for localrc.  But note, if you set this you cannot just un-set it and have devstack work with python2.7 again
20:23:39 <johnsom> It is not clean
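[editor's note: for reference, the setting johnsom mentions goes in the devstack local.conf/localrc; a minimal sketch, assuming a standard devstack checkout (service list illustrative):]

```
# local.conf / localrc fragment -- opt the devstack run into Python 3
# Note: as discussed above, un-setting this later does not cleanly
# return the environment to python2.7; rebuild or restore a snapshot.
USE_PYTHON3=True
enable_plugin octavia https://git.openstack.org/openstack/octavia
```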
20:24:07 <nmagnezi> johnsom, noted. thank god for snapshots :)
20:24:21 <johnsom> +1 to that
20:24:44 <johnsom> I will dig a bit deeper on that bug after meeting.
20:24:54 <nmagnezi> great
20:24:57 <nmagnezi> thank you
20:25:34 <johnsom> Any other progress to discuss or bugs to highlight?
20:25:54 <johnsom> #topic Octavia team mascot
20:26:05 <johnsom> #link https://etherpad.openstack.org/p/octavia-mascot
20:26:23 <johnsom> Well, we got a little "input" from our designate friends.....
20:26:56 * nmagnezi reads
20:27:26 <johnsom> I am thinking we leave this open for ideas for another week, then next week I will ask for votes
20:27:39 <diltram> +1
20:28:41 <johnsom> I will see if I can come up with a ranked voting thing that isn't overcomplicated.  Otherwise it will be +1s on the etherpad
20:29:37 <johnsom> #topic OpenStack Client (OSC) commands for octavia
20:30:14 <johnsom> At the PTG I spent some time in OSC room discussing our client need for Pike
20:30:49 <johnsom> Dean Troyer was very helpful and supportive
20:31:17 <johnsom> It was agreed that we must put our commands in a OSC plugin repo
20:31:52 <ankur-gupta-f4> which I don't like or agree with. But he's the big boss
20:31:58 <johnsom> I think ankur-gupta-f4 was thinking we could do as neutron did and put them in tree, but that sounded like a no-go from the room
20:32:23 <johnsom> I kind of like having it under our control
20:32:36 <xgerman> yeah, having our own repo shields us from being accidentially deleted
20:32:42 <sindhu> +1
20:32:42 <johnsom> +1
20:32:58 * johnsom thinks "security groups"
20:33:32 <johnsom> We also talked about the new "terminology" that is being used for OSC
20:33:44 <m-greene> do we have an octavia namespace in github to at least create a fork from the OSC repo?
20:34:05 <johnsom> The command layout folks liked in the room (myself included) are:
20:34:15 <nmagnezi> this is actually the first time i hear about the octavia client. I'm not sure I'm following on the disagreement ankur-gupta-f4 had with  Dean
20:34:26 <ankur-gupta-f4> the question then becomes: do we want python-octaviaclient to just contain OSC plugin commands, or should we also bring up a pure octavia client, so users can run 'octavia * create' and 'openstack * create'
20:34:30 <johnsom> m-greene I have put in to create our OSC plugin repo.  It will not be a fork however, just a plugin.
20:35:28 <ankur-gupta-f4> i.e. something like this https://github.com/openstack/python-neutronclient/tree/master/neutronclient/osc
20:35:38 <m-greene> got it.  I was thinking that we’ve had this problem too, and have had to either restore from someone’s fork, or contact github and ask them to “undelete” (which sometimes works)
20:35:39 <johnsom> ankur-gupta-f4 For Pike I am mostly interested in just the OSC plugin.  I guess we could consider a native client in the future, but not sure that is the direction OpenStack is going.
20:36:00 <ankur-gupta-f4> okay sounds good and achievable for Pike
20:36:36 <johnsom> Excellent
20:36:56 <johnsom> Anyway, the commands we discussed and I would like the team's feedback on:
20:37:43 <johnsom> "openstack loadbalancer create ..."
20:37:44 <johnsom> "openstack loadbalancer listener create ..."
20:37:44 <johnsom> "openstack loadbalancer pool create ..."
20:38:01 <johnsom> It has tab completion, so that at least speeds it up
20:38:06 <ankur-gupta-f4> +1
20:38:28 <ankur-gupta-f4> we would own the loadbalancer namespace within OSC
20:38:29 <johnsom> basically our stuff would live under the "loadbalancer" namespace
20:38:35 <johnsom> Yep
20:38:47 <ndahiwade> +1
20:38:49 <nmagnezi> are the old commands (lbaas-loadbalancer-create for example) going to be deprecated in Pike?
20:38:58 <rm_work> those don't exist in OSC
20:39:15 <rm_work> and the neutronclient is ALREADY deprecated, right?
20:39:19 <xgerman> yep
20:39:20 <ankur-gupta-f4> yea
20:39:22 <diltram> yes
20:39:35 <johnsom> Yes, as soon as we have a replacement available we can mark the old commands deprecated (though neutron kind of already did that to us).
20:40:40 <johnsom> Any other thoughts/comments on that?
20:41:12 <rm_work> I wish we could alias "lb" namespace too
20:41:19 <rm_work> but i guess if tabcomplete ALWAYS works?
20:41:39 <rm_work> i assume it's only if it installs the bash-completion stuff correctly (and you are using bash?)
20:42:03 <johnsom> Yeah, the alias is an interesting question.  I'm not sure about that.  Though "lb" might be confusing for the neutron lbaas v1 hold outs
20:42:41 <johnsom> Just to be clear, there will not be support for LBaaS v1 API in the OSC
20:43:02 <johnsom> rm_work you can run "openstack" and get an interactive environment as an option...
20:43:25 <rm_work> ah, true
20:43:53 <johnsom> Ok, one last item on my agenda and the open discussion
20:44:01 <johnsom> #topic Proposed Health Manager endpoint for provider health/stats reporting
20:44:40 <johnsom> At the PTG we discussed adding another endpoint to the health manager processes so that the drivers can post status/stats updates.
20:45:09 <johnsom> This would be similar to how the amps report in their health heartbeats.
20:45:48 <johnsom> I am thinking something simple like another UDP message format, similar signing.
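[editor's note: a minimal sketch of the "UDP message with similar signing" idea, modeled loosely on how amphora heartbeats are HMAC-signed; the key, payload fields, port, and digest layout here are all assumptions for illustration, not the eventual spec:]

```python
import hashlib
import hmac
import json

# Hypothetical shared secret between a driver and the health manager
KEY = b"shared-heartbeat-key"

def build_message(payload: dict, key: bytes) -> bytes:
    """Serialize a status/stats payload and append an HMAC-SHA256 digest,
    mirroring the idea of signed amphora health heartbeats."""
    body = json.dumps(payload, sort_keys=True).encode("utf-8")
    digest = hmac.new(key, body, hashlib.sha256).digest()
    return body + digest

def verify_message(packet: bytes, key: bytes) -> dict:
    """Split off the trailing 32-byte digest, verify it in constant time,
    and return the decoded payload."""
    body, digest = packet[:-32], packet[-32:]
    expected = hmac.new(key, body, hashlib.sha256).digest()
    if not hmac.compare_digest(digest, expected):
        raise ValueError("bad signature")
    return json.loads(body.decode("utf-8"))

# A driver would then fire the datagram at the health manager endpoint, e.g.:
msg = build_message({"lb_id": "...", "operating_status": "ONLINE"}, KEY)
# socket.socket(socket.AF_INET, socket.SOCK_DGRAM).sendto(msg, ("hm-host", 5555))
```

Since UDP is fire-and-forget, a lost status update just gets superseded by the next one, which is what makes this cheaper to scale than a synchronous REST callback.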
20:46:01 <johnsom> Any comments on that?
20:46:10 <xgerman> can we see a spec?
20:46:14 <johnsom> Are there any other vendor driver folks here?
20:46:26 <rm_work> is that something the vendors are going to want to deal with?
20:46:32 <johnsom> Yeah, it would need a spec.  Plus I want to write up a "how to write a driver" doc
20:46:40 <xgerman> rm_work yes
20:46:43 <rm_work> I mean UDP
20:46:57 <xgerman> they asked for functionality similar to neutron agent-list, which tells you the status of agents
20:47:04 <rm_work> ah, yeah ok
20:47:18 <johnsom> rm_work Well, currently they are reaching into the neutron DB to post this.  I want a scalable way for them to do it that doesn't mean reaching into the DB
20:47:59 <xgerman> ah, ok, got confused… the one I mentioned was <the other> health
20:48:01 <rm_work> i just wonder if they'd be more happy about something like just another REST call
20:48:14 <johnsom> If we just expose a callback in the API process for the drivers, it is limited to the number of API processes deployed, which I would expect would be a smaller number than the HM would be
20:48:33 <rm_work> since they wouldn't need it to deal with failover, the timing would be less of an issue, and it might also be batched
20:48:37 <johnsom> We could do a full REST
20:48:38 <xgerman> most installations I know scale them lineary
20:48:46 <johnsom> Might be a hammer for a fly though
20:48:57 <rm_work> well, not like we've been accused of that before :P
20:49:16 <rm_work> I'd just like to hear from some vendors first, before we go and implement something
20:49:30 <johnsom> Yep, thus the agenda item here....
20:49:38 <diltram> plus rm_work m-greene told us that they're really interested in using our internal
20:49:49 <diltram> internals*
20:49:59 <diltram> not just plain using of drivers api
20:50:19 <rm_work> k, i just don't see a lot of vendors at our meetings :P
20:50:29 <rm_work> might be a good thing for the ML
20:50:40 <johnsom> Yeah, sometimes people lurk.  I figured this is a good start
20:50:52 <diltram> yeah, in my opinion
20:51:07 <diltram> we can start with supporting one really interested vendor
20:51:20 <m-greene> right.. we need to evaluate how much of the octavia guts we can leverage to not reinvent the wheel.
20:52:10 <diltram> I'd rather help someone who is actually interested in this than try to help all the people and companies who are completely uninterested in it
20:52:11 <rm_work> o/ kk
20:52:13 <m-greene> probably health, but not housekeeping.. hence a way to post status/health to allow an operator to self-diagnose
20:52:15 <johnsom> Ok, at least the topic was brought up.  Next steps would be ML or a spec for people to comment on
20:52:26 <xgerman> +1
20:53:00 <johnsom> Anyone volunteering to start a spec?
20:53:18 * johnsom thinks it can't hurt to ask.....
20:53:43 <m-greene> i don’t know enough, plus hoping to join in on flavors and possibly gui
20:53:49 <johnsom> Don't trip stepping backward....  Grin
20:54:01 <johnsom> Ok, I will get to it soon-ish
20:54:20 <johnsom> m-greene those would be great
20:54:37 <johnsom> #topic Open Discussion
20:54:43 <m-greene> Rich and I posted comments on the flavors spec, not sure next steps
20:54:47 <xgerman> depending on my osa adventures I might be able to help
20:54:50 <johnsom> Since we have a few minutes left , any other items?
20:55:20 <xgerman> should we talk flavors?
20:55:24 <johnsom> m-greene Yeah, I'm not sure if the original poster is still able to work on that or not.
20:55:53 <johnsom> We have five minutes left.   Let's comment on the spec and put it on next week's agenda.
20:55:59 <xgerman> +1
20:56:10 <xgerman> also ACTIVE-ACTIVE
20:56:17 <johnsom> #link https://review.openstack.org/392485
20:56:36 <johnsom> Yeah Act/Act is another good one for next week
20:56:42 <xgerman> k
20:58:07 <johnsom> Done, on next weeks agenda
20:58:09 <m-greene> I am planning/re-planning my team’s work through June.  Is GUI or flavors more important to the community?
20:58:23 <xgerman> GUI always increases adoption
20:58:29 <xgerman> so I would vote GUI
20:58:42 <johnsom> Yeah, GUI is the mass market appeal
20:58:43 <diltram> +1
20:59:02 <m-greene> both are “high” value to me, but not sure we’d be able to tackle both technically.
20:59:05 <m-greene> ok
20:59:40 <xgerman> well, once the spec is ironed out we can see if somebody else can pick up flavors…
21:00:02 <johnsom> I would like to see progress on the flavors spec though.  We need some level of "flavors" in the API
21:00:18 <xgerman> +1
21:00:19 <johnsom> Ok, we are out of time today.   Thanks folks!
21:00:23 <xgerman> o/
21:00:27 <diltram> thx, cu
21:00:29 <nmagnezi> o/
21:00:30 <johnsom> #endmeeting