20:00:06 <johnsom> #startmeeting Octavia
20:00:07 <openstack> Meeting started Wed Sep  7 20:00:06 2016 UTC and is due to finish in 60 minutes.  The chair is johnsom. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:09 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:11 <openstack> The meeting name has been set to 'octavia'
20:00:14 <sbalukoff> Howdy folks!
20:00:20 <diltram> hey
20:00:21 <Frito> o/
20:00:24 <johnsom> Hi there
20:00:54 <dougwig> hiya
20:01:00 <johnsom> #topic Announcements
20:01:22 <TrevorV> o/
20:01:24 <xgerman> o/
20:01:24 <johnsom> I don't have any major announcements today, anyone else?
20:01:31 <sbalukoff> None for me.
20:01:33 <sbalukoff> Er...
20:01:39 <sbalukoff> Quick review of deadlines perhaps?
20:01:42 <ptoohill> o/
20:01:43 <Frito> Can I announce that there are no announcements?
20:02:06 <xgerman> knock yourself out
20:02:11 <TrevorV> It's a command ;)
20:02:13 <johnsom> We are in feature freeze.
20:02:20 <johnsom> #link https://releases.openstack.org/newton/schedule.html
20:02:45 <sbalukoff> How long until we have the docs freeze?
20:03:03 <johnsom> I would like to cut our Newton release the week of the 25th
20:03:09 <sbalukoff> Ok.
20:03:21 <sbalukoff> Sounds good to me.
20:03:23 <johnsom> Well, since we are doing docs in tree, that is up to us.
20:03:26 <nmagnezi> o/
20:03:37 <nmagnezi> here as well.
20:03:50 <sbalukoff> Well, I figure we should get as many docs in for the Newton release cut as we can.
20:03:53 <johnsom> They are low risk (though we will talk about that later on the agenda) so I feel they are like bug fixes
20:04:01 <sbalukoff> Ok.
20:04:04 <johnsom> That would be great!
20:04:59 <johnsom> I still have some open comments on the quick start guide (though it's like an overview, so maybe an open discussion topic) and will soon get to the L7 doc
20:05:07 <sbalukoff> Yep.
20:05:17 <johnsom> #topic Brief progress reports / bugs needing review
20:05:33 <sbalukoff> I'm working with Leslie to see her availability to help with that. In any case, we'll get docs done, whether it's me or her who does them.
20:05:35 <johnsom> I have been working on bug fixes and reviews.
20:05:47 <johnsom> #link https://review.openstack.org/364707
20:05:50 <sbalukoff> Same here; Mostly reviews.
20:06:01 <TrevorV> #link https://review.openstack.org/#/c/360794
20:06:03 <johnsom> That one is up for review.  Fixes another admin_state_up issue
20:06:06 <TrevorV> Quotas review ready for consumption!
20:06:08 <TrevorV> WOOT
20:06:14 <sbalukoff> Cool beans.
20:06:16 <TrevorV> Just tested it at 3:01
20:06:26 <sbalukoff> I'll have a look at these later this week, eh.
20:06:43 <johnsom> FYI, quotas is an Ocata patch, so it should probably be prioritized down a bit on the reviewers' list.
20:06:51 <sbalukoff> I just slapped a +2 on this:
20:06:54 <sbalukoff> #link https://review.openstack.org/#/c/366690/
20:06:58 <johnsom> I hope we focus on bug fixes for Newton and docs.
20:07:14 <sbalukoff> johnsom: +1
20:07:22 <nmagnezi> sbalukoff, thanks so much for the quick review for this one ^
20:07:42 <sbalukoff> We're in the final stretch for cutting the Newton release. Let's try to make the stuff in it suck as little as possible. :)
20:07:47 <johnsom> Also, on the topic of bugs needing review.  sbalukoff I signed you up for these two:
20:07:57 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1620914
20:07:57 <openstack> Launchpad bug 1620914 in octavia "Active/Active specs are not passing the specs tox test" [High,New] - Assigned to Stephen Balukoff (sbalukoff)
20:08:03 <sbalukoff> johnsom: Ok.
20:08:05 <johnsom> #link https://bugs.launchpad.net/octavia/+bug/1618664
20:08:05 <openstack> Launchpad bug 1618664 in octavia "WARNING citation not found: P2 in Act/Act specs" [Low,New] - Assigned to Stephen Balukoff (sbalukoff)
20:08:23 <sbalukoff> I'll have a look and see about clearing those up.
20:08:29 <johnsom> The active/active spec is causing some problems.  I don't want to revert it, but we need to get that fixed.
20:08:45 <sbalukoff> Also-- it's annoying to find that we aren't running the specs test in any automated way. Mind if I make a patch to combine this with the docs tests?
20:08:54 <johnsom> The test doesn't like the additional sections to the spec (your notes section)
20:09:11 <johnsom> Please do!
20:09:32 <johnsom> Thanks for looking at those.
20:09:50 <sbalukoff> johnsom: Ok. I might look at making an update to our specs testing to allow for optional sections like that notes section. :/
20:10:16 <sbalukoff> Because there's a lot of good stuff there I don't want to eliminate. (Otherwise, I guess I could put this in a separate non-spec doc.)
20:10:22 <johnsom> Well, I think it is part of the standard OpenStack specs job, so, maybe just fix the doc?
20:10:40 <johnsom> It is probably mostly moving things and titles around
20:10:50 <sbalukoff> Eh...  we'll see.
20:11:08 <sbalukoff> I think there's discussion there that's not usually captured in most specs.
20:11:31 <sbalukoff> So it might work better in a separate part of our documentation entirely. Maybe a "design discussion" document or something.
20:11:46 <johnsom> I agree and commented as such on the patch, but trying to be flexible.  I see value in that info.
20:11:55 <sbalukoff> Ok, cool.
20:11:56 <johnsom> That works for me too.
20:12:09 <johnsom> #topic Patches requiring discussion/consensus
20:12:31 <johnsom> Any patches we need to discuss as a team (other than the two I put on the agenda)?
20:12:53 <johnsom> #topic Discuss nested virtualization on the infra gates
20:13:16 <sbalukoff> Woot! We got nested virt on the gates?
20:13:24 <johnsom> Ok, moving on.  So, I saw that a few hosting providers enabled nested virt on the gate hosts (OSIC and bluebox for example)
20:13:34 <johnsom> See the two jenkins runs here:
20:13:36 <sbalukoff> That's going to be huge, and should very much speed up our run times.
20:13:41 <johnsom> #link https://review.openstack.org/#/c/366472/
20:14:05 <johnsom> So if you look at the two jenkins runs, the first run was on a non-nested virt host, the second on an OSIC host.
20:14:14 <sbalukoff> Wow!
20:14:26 <johnsom> It removed almost an hour from our scenario tests.
20:14:49 <johnsom> That is the good news.  Bad news is that not all of the providers are running nested virt.
20:15:01 <sbalukoff> Er... we'll take what we can get?
20:15:03 <johnsom> Also, Infra still has FUD about it.
20:15:10 <sbalukoff> Bah!
20:15:25 <sbalukoff> Damn the torpedoes! Full speed ahead!
20:15:29 <johnsom> So as it stands, we will roll the dice whether we get a fast host or not.
20:15:57 <sbalukoff> Hey, I'm just happy some of our hosts have it now!
20:16:03 <johnsom> Which means, it will sometimes be fast, sometimes not, and we can't expand our scenario testing (due to the timeouts on slow hosts).
20:16:04 <sbalukoff> What a difference it makes for us!
20:16:18 <johnsom> Yeah, it is huge.
20:16:49 <diltram> but like we discussed earlier
20:17:02 <johnsom> So, the question to the team is: should we merge this patch that will take advantage of hosts with nested virt and live with the variability, or not?  (I will clean it up first of course.)
20:17:12 <diltram> we can split our testing scenarios into smaller parts that finish within the expected time on -kvm nodes
20:17:41 <dougwig> johnsom: hell yes we should. i don't see a downside to occasionally faster.
20:17:52 <sbalukoff> dougwig: +1
20:18:15 <diltram> especially since right now a big part of the tests runs on the OSIC cloud, and there will be a second cloud that big with kvm available as well
20:18:16 <johnsom> Cool.  That is my perspective also.  Plus it will help counteract the FUD by showing we are using it.
20:18:44 <johnsom> Any other comments from the lurkers?
20:18:45 <diltram> we need to ping people from astara about that
20:18:50 <sbalukoff> I think we're used to blazing new trails in this group. This is just another one.
20:19:00 <johnsom> Astara?
20:19:07 <diltram> L3 on Service VM
20:19:14 <dougwig> so, uhh, i can't believe i'm saying this out loud, but if a node doesn't have nested virt, and it goes offline, it'll get silently rescheduled.  *cough*.  that's likely a really bad idea.
20:19:18 <johnsom> I didn't see that message
20:20:16 <johnsom> Hahaha, I like the way you think dougwig...  Just not sure I want to get caught playing that game....
20:20:18 <diltram> hahaha
20:20:26 <diltram> lol
20:20:49 <johnsom> Wow, that would be a hugely simple solution though
20:20:50 <sbalukoff> Ugh...  my IRC client just froze for a bit. Pretty sure I missed the last half-dozen messages or so. Oh well, I'm sure it was nothing important.
20:21:01 <dougwig> you're still welcome to put the bigger scenario job on the a10 ci, which has nested virt guaranteed.
20:21:16 <johnsom> Yeah, we should do that
20:21:18 <sbalukoff> Yep!
20:21:44 <johnsom> Ok, so I will clean up this patch (I just did it for testing last night) and let you all vote on it.
20:21:53 <johnsom> Probably shortly after the meeting.
20:21:53 <sbalukoff> Cool beans.
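A minimal sketch of the idea behind the patch discussed above — taking advantage of nested virt when a gate node happens to have it — assuming an x86 host and a devstack-gate style pre-test hook. The hook placement and contents here are illustrative only, not the actual change under review:

    #!/bin/bash
    # Illustrative pre-devstack hook: if the provider enabled nested virt,
    # the guest CPU exposes vmx/svm, so use kvm; otherwise fall back to qemu.
    if egrep -q '(vmx|svm)' /proc/cpuinfo; then
        export DEVSTACK_LOCAL_CONFIG+=$'\n'"LIBVIRT_TYPE=kvm"
    else
        export DEVSTACK_LOCAL_CONFIG+=$'\n'"LIBVIRT_TYPE=qemu"
    fi

This is also where the variability comes from: the same job runs fast on kvm-capable hosts (OSIC, Bluebox) and slow everywhere else.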
20:22:10 <johnsom> #topic Bandit running as a linter in pep8
20:22:27 <johnsom> This one makes me cringe a bit.
20:22:27 <sbalukoff> What's the issue here?
20:22:29 <diltram> ehhhhh
20:23:00 <johnsom> Infra wanted everyone to move their bandit jobs into the pep8 linters block to save a gate job.
20:23:03 <diltram> we need to find some other security test and get our dedicated gate job back again
20:23:07 <johnsom> We did this and it works fine.
20:23:27 <sbalukoff> Ok.
20:23:37 <johnsom> However, if you run tox -e pep8 with uncommitted changes, bandit will fail on you.
20:23:43 <sbalukoff> Oh.
20:23:49 <sbalukoff> So, commit your changes first.
20:23:57 <johnsom> This is because bandit is comparing commit sets to see what we broke
20:24:06 <johnsom> Yes, but that is a pain.
20:24:24 <diltram> I just want to check that I didn't break anything in imports
20:24:29 <Frito> would adding a new thing in tox help?
20:24:30 <johnsom> I talked with the bandit guys, they agree it is a pain.  Their solution is to wrap it with a git stash
20:24:31 <diltram> and I need to commit all the code to check that
20:24:39 * Frito is completely unfamiliar with bandit
20:24:43 <sbalukoff> We could create another couple of tox environments that just do pep8 and just do bandit by themselves, and let the gate run the combined job.
20:24:53 <Frito> tox -ebandit
20:24:55 <pglass> is there an option to run bandit without doing that comparison?
20:25:00 <diltram> nooo
20:25:03 <pglass> like, just give me all and any failures?
20:25:05 <johnsom> we have -e bandit
20:25:10 <diltram> this is how bandit works
20:25:13 <Frito> ah, k. I'll shush :-)
20:25:18 <sbalukoff> Sure, but maybe make a pep8-manual or something.
20:25:26 <sbalukoff> or pep8-only
20:25:43 <johnsom> I think the stash/unstash would work, it just has some downsides
20:25:46 <diltram> that's one solution
20:26:12 <diltram> we can change which tox envs run when you use just "tox"
20:26:29 <sbalukoff> Yep.
20:26:31 <johnsom> I.e. it won't test the uncommitted code.  We have to do ugly things in tox.ini to enable git and mask its environment needs (username, email)
20:26:36 <diltram> and on the gate use pep8 with bandit, and locally pep8-only
20:26:47 <sbalukoff> This doesn't seem hard. And the git stash stuff seems overly complicated.
20:26:53 <dougwig> can we change the test to only run bandit if there are no changes outstanding?
20:27:21 <sbalukoff> Or what dougwig suggested...
20:27:39 <johnsom> dougwig: probably with some git magic.  The tox "command" isn't that smart, but I can come up with something
20:28:01 <dougwig> it's just running a line of shell inside the pep8 env, so...
20:28:03 <sbalukoff> Though, dougwig, that's likely to lead to situations where people don't understand why different tests get run based on whether local changes are committed yet.
20:28:24 <johnsom> Yeah, but I know pipes don't work in those lines, so masking the failure sucks
20:28:26 <dougwig> "warning: bandit not running due to uncommitted files"
20:29:02 <sbalukoff> dougwig: Oh sure. Be sensible.
20:29:37 <johnsom> I hit that with the image build tox env.  However, I can work around it.  So, go forward with masking the failure with a warning message?
20:29:57 <johnsom> I.e. mask it or skip the test
20:30:20 <sbalukoff> johnsom: Yep.
20:30:35 <sbalukoff> johnsom: That sounds good. Or again, create a new tox environment. Either one is fine by me.
20:31:08 <johnsom> I like pep8 just working like it does on other projects, so I lean towards this approach.
20:31:29 <johnsom> Ok, I will put a patch up for that too.  Thanks for the input.
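A minimal sketch of the "skip bandit on a dirty tree" approach agreed above, assuming a small helper script called from the pep8 env in tox.ini. The script name and the exact bandit arguments are placeholders for whatever the existing bandit env runs; a wrapper script is used because tox command lines don't allow pipes or shell conditionals:

    #!/bin/bash
    # tools/pep8_bandit.sh (hypothetical helper)
    # Run bandit only when the working tree is clean, so a local
    # "tox -e pep8" with uncommitted changes doesn't fail on the commit
    # comparison described above; the gate always runs on a clean checkout.
    if git diff --quiet && git diff --cached --quiet; then
        bandit -r octavia -ll -ii -x octavia/tests
    else
        echo "WARNING: bandit not running due to uncommitted files"
    fi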
20:31:42 <johnsom> #topic Open Discussion
20:31:43 <sbalukoff> Yep. I also don't like the idea of our tox environments being special snowflakes.
20:32:03 <johnsom> nmagnezi I think you had a few topics
20:32:26 <nmagnezi> johnsom yup
20:32:29 <nmagnezi> hi all
20:32:39 <nmagnezi> So sbalukoff already mentioned https://review.openstack.org/#/c/366690 (which still needs a second +2 btw)
20:32:50 <nmagnezi> I just wanted to know if that now unblocks https://review.openstack.org/#/c/299998/, which was mentioned in last week's agenda
20:33:21 <sbalukoff> nmagnezi: Yes, if you update the second patch there to have a depends-on the first one, I'll +2 it as well.
20:33:48 <nmagnezi> sbalukoff, sorry for the possibly silly question, but how do I do that?
20:34:18 <johnsom> dougwig would https://review.openstack.org/#/c/299998/ be under consideration now?  It feels like a feature
20:34:25 <sbalukoff> Add a line to the commit message that has Depends-On: <change_id_of_other_patch>
20:34:52 <dougwig> johnsom: that feels like ocata to me, at this point.
20:34:55 <johnsom> #link http://docs.openstack.org/infra/manual/developers.html#cross-repository-dependencies
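For reference, the footer of the dependent patch's commit message ends up looking something like this (the Change-Id values are placeholders, not the real ones from the reviews above):

    <commit subject line>

    <commit message body>

    Depends-On: I0000000000000000000000000000000000000000
    Change-Id: I1111111111111111111111111111111111111111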
20:35:31 <nmagnezi> dougwig, johnsom, i worked really hard to make it to this version. if it's doable that would really be great.
20:35:32 <sbalukoff> Ok, so... I'll +2 it, but -A it with a note that we want it to get in after we've cut the Newton release.
20:35:56 <nmagnezi> it is in the review queue for some time
20:36:04 <sbalukoff> That is true.
20:36:49 <nmagnezi> i even joined here at 23:30 local time for that one ;)
20:36:52 <johnsom> It has been in the queue for a while, but I think most of that is from outstanding comments.
20:38:20 <nmagnezi> johnsom, well, the last couple of commits are not that recent. but of course it's your call. I'm just asking nicely :)
20:39:06 <johnsom> I understand the concern, with the feature freeze I think we missed newton on this one.
20:39:28 <nmagnezi> :<
20:39:50 <nmagnezi> sorry to hear that
20:40:02 <johnsom> Sorry to make that call too.
20:40:16 <nmagnezi> also, i wanted to raise awareness about two more things
20:40:25 <johnsom> Please do
20:40:59 <nmagnezi> One is https://review.openstack.org/#/c/344658/ which was also mentioned last week. I added more unit tests as requested. Just want to make sure we can merge it before the cut (or after, if it falls under the same logic).
20:41:12 <nmagnezi> The second thing is that I filed a CVE against Octavia two days ago, but it got zero response. Did anyone notice that CVE?
20:41:50 <dougwig> bug number?
20:41:50 <nmagnezi> I can share the launchpad number privately or publicly (if that is allowed, since access is blocked for unauthorized ppl)
20:42:04 <johnsom> With the process monitor patch, I think it is the same issue with the feature freeze.
20:42:05 <dougwig> yes, only the security team can view it.
20:42:18 <nmagnezi> dougwig, 1620629
20:42:34 <johnsom> No one reached out to me about the CVE.
20:42:43 <sbalukoff> Nor to me.
20:42:46 <dougwig> is that against octavia or neutron?
20:43:27 <nmagnezi> dougwig, octavia
20:43:40 <dougwig> i bet it has an empty security list.  can you add neutron to the bug, so i can see it?
20:43:55 <dougwig> no one was notified, i bet.
20:44:36 <nmagnezi> don't project maintainers get notified automatically?
20:45:02 <dougwig> no, the security team is different.
20:45:03 <johnsom> Not necessarily for the security issues.  There is a special process around it.
20:46:06 <nmagnezi> johnsom, I was not aware of that. That explains why none of you addressed this. Can you access it?
20:46:16 <sbalukoff> I can't.
20:46:16 <diltram> johnsom: do we have properly configured security group on launchpad?
20:46:26 <dougwig> nmagnezi: did you add neutron as an affected project yet?  that'll give me access.
20:46:40 <johnsom> No, but if you add neutron we can get at it.
20:46:41 <nmagnezi> dougwig, i'll do that right away
20:47:11 <nmagnezi> can i add neutron without removing octavia?
20:47:12 <dougwig> diltram: good question.  let me find out
20:47:16 <dougwig> nmagnezi: yes
20:48:03 <dougwig> #action dougwig to get cve list populated for octavia
20:48:04 <johnsom> There was discussion of this previously with the security team.  I thought it was set up to go to the neutron group, which is how we got the last one
20:48:36 <nmagnezi> dougwig, it replaced it, sorry. Can you please re-add octavia now? Not sure why it does not allow me to add two projects
20:48:44 <diltram> our whole launchpad is unconfigured like it was even with reviews
20:49:40 <johnsom> It's a bit strange being neutron and not....
20:50:38 <diltram> but our launchpads are not connected in any way :P
20:51:12 <sbalukoff> Ok, I'm able to read the CVE now. And yes, that's an issue.
20:51:18 <johnsom> Yep
20:51:52 <johnsom> I can take that and get a fix going.
20:51:58 <sbalukoff> Cool beans.
20:52:24 <nmagnezi> sbalukoff, added the Depends-On to the auto reschedule bug
20:52:32 <johnsom> Fun.  Ok, any other topics today?  We have about 8 minutes.
20:52:32 <sbalukoff> nmagnezi: Sounds good.
20:52:36 <dougwig> thanks for bringing this up nmagnezi. this was a case of no news meaning no one had seen it.
20:52:52 <sbalukoff> Yes, indeed!
20:53:09 <nmagnezi> dougwig, happy to help :-)
20:53:51 <diltram> about keystone, is that a feature or a bug fix?
20:53:54 <sbalukoff> nmagnezi: Ok, the <change_id> in that Depends-On: tag should be the change ID of the other patch, not a specific patch number. The change-id of the other patch is found in the other patch's commit message.
20:54:19 <diltram> #link https://review.openstack.org/#/c/364655/
20:54:21 <johnsom> diltram That is definitely a feature and for Ocata.
20:54:22 <diltram> here is review for it
20:54:23 <nmagnezi> sbalukoff, hah, sec lemme fix that right now
20:54:29 <sbalukoff> nmagnezi: Ok!
20:54:42 <diltram> ok, I had the same feelings :P
20:55:10 <sbalukoff> Are we just going to mark the Ocata patches with a -A for now (plus a note as to why the -A is there)?
20:55:15 <diltram> but sbalukoff you're not lying any more, we're secure and we're good to go to validate requests with keystone :P
20:55:38 <sbalukoff> diltram: Awesome! I'll poke at it again later this week.
20:55:46 <diltram> great :)
20:56:18 <diltram> I know, one more thing
20:56:18 <johnsom> sbalukoff: We can do that if we want.  Otherwise it's on the cores to not merge feature stuff until we cut the branch
20:56:25 <nmagnezi> sbalukoff, Depends-On: I652ab029b7427c8783e4b2f0443a89ee884bf064   --> like that?
20:56:33 <diltram> we're starting to use release notes?
20:56:38 <sbalukoff> Ok. In any case, a note should be left saying why there isn't a +A yet.
20:56:41 <johnsom> Yes, we use release notes
20:56:56 <sbalukoff> Yep! Technically we started that with Mitaka. :D
20:56:59 <diltram> ok, so I need to update those patch sets
20:57:04 <diltram> ok :P
20:57:07 <nmagnezi> johnsom, this is prolly also for Ocata, but what about this refactor? https://review.openstack.org/#/c/346728/
20:57:08 <sbalukoff> And we're being better about actually doing it now. :D
20:57:42 <diltram> nmagnezi: I was trying to test it but I didn't have time for that
20:57:48 <johnsom> nmagnezi Yes, Ocata.
20:58:15 <johnsom> Ok, I can tag the Octavia patches to clarify.  Someone want to take n-lbaas?
20:58:19 <sbalukoff> nmagnezi: Could you also add a release note for that patch, too?
20:58:33 <nmagnezi> sbalukoff, which one?
20:58:37 <sbalukoff> Er.. this patch, I mean: https://review.openstack.org/#/c/299998
20:59:05 <nmagnezi> sbalukoff, sure can. btw is my formatting correct ^^ ?
20:59:20 <nmagnezi> diltram, much appreciated!
20:59:52 <sbalukoff> nmagnezi: Er... I don't see a new patch set since 32.
20:59:58 <sbalukoff> And it wasn't correct in 32.
21:00:02 <johnsom> Ok, out of time folks.
21:00:07 <johnsom> #endmeeting