16:00:29 <smcginnis> #startmeeting Cinder
16:00:33 <openstack> Meeting started Wed Dec  7 16:00:29 2016 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:35 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:37 <openstack> The meeting name has been set to 'cinder'
16:00:40 <bswartz> .o/
16:00:47 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell stevemar watanabe.isao,tommylike.hu mdovgal ildikov
16:00:50 <Cibo> hello
16:00:50 <scottda> hi
16:00:51 <erlon-airlong> hey
16:00:52 <tommylikehu_> hello
16:00:53 <xyang1> hi
16:00:53 <mdovgal> Hello
16:00:58 <viks> hi all
16:01:01 <mtanino> hi/
16:01:03 <jungleboyj> o/
16:01:14 <bswartz> smcginnis: biggest ping ever!
16:01:28 <smcginnis> :)
16:01:35 <smcginnis> #topic Announcements
16:01:38 <e0ne> hi
16:01:53 <smcginnis> O-2 is next week.
16:02:08 <smcginnis> Let's make sure we get feedback to the new drivers. There are a few that are very close.
16:02:11 <smcginnis> And a few that are not.
16:02:14 * DuncanT wakes up
16:02:25 <smcginnis> DuncanT: Isn't it the afternoon there?
16:02:35 <DuncanT> smcginnis: Yup
16:02:38 <smcginnis> :)
16:02:39 <e0ne> :)
16:02:54 <smcginnis> #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:03:14 <smcginnis> Also have a few things we should get in ASAP from our priority list there. ^
16:03:28 <smcginnis> So we can have as much run time as possible before the freeze.
16:03:32 <mars> hi
16:03:35 <tbarron> hi
16:03:39 <markstur> hi
16:03:52 <erlon-airlong> smcginnis: I'm missing reviews on the NFS snapshot
16:03:52 <smcginnis> OK, that's all I have for announcements I guess. Let's move on.
16:04:11 <smcginnis> erlon-airlong: OK, I'll get that on my list. Would like to see that move along.
16:04:23 <smcginnis> #topic Running Bandit in gate as non-voting
16:04:27 <smcginnis> xyang1: Hey
16:04:30 <erlon-airlong> jungleboyj: Jay if you can put that in yours too
16:04:38 <xyang1> jessegler: hi, you want to talk about this?
16:04:44 <jessegler> Yeah!
16:04:55 <xyang1> jessegler: go ahead:)
16:05:06 <erlon-airlong> smcginnis: thanks!
16:05:11 <jessegler> So a few of us were wondering why Bandit isn't running as part of the gate?
16:05:31 <jessegler> I heard people had maybe tried that before, but I'm not sure what became of that effort?
16:05:32 <xyang1> #link https://wiki.openstack.org/wiki/Security/Projects/Bandit
16:05:37 <smcginnis> jessegler: Last time I looked at it (which I admit is a while now) there were just way too many false alerts.
16:05:56 <DuncanT> Yeah, it was full of noise and no useful signal at all
16:06:07 <e0ne> xyang1: in general, I like this idea
16:06:09 <smcginnis> I love the concept, but it needs some work before it's anything but noise at this point.
16:06:18 <e0ne> xyang1: but now it reports too many errors
16:06:36 <xyang1> jessegler: have you got feedback from  other projects who run it in the gate?
16:06:48 <xyang1> jessegler: is it reliable for Keystone?
16:06:57 <xyang1> xyang1::(
16:07:01 <jessegler> I know keystone does, I can follow up with them
16:07:07 <xyang1> e0ne: :(
16:07:12 <tbarron> in barcelona security meeting on vulnerability assessment bandit was recommended, have we given them this feedback?
16:07:30 <e0ne> Total issues (by confidence): High: 149
16:07:39 <erlon-airlong> xyang1: what storage BE it uses?
16:07:40 <e0ne> Total issues (by severity):  High: 21
16:07:51 <e0ne> it doesn't look good
16:08:04 <dstanek> yeah, i believe we run it as a gate check in keystone
16:08:08 <e0ne> can we configure it and/or fix at least high priority errors?
16:08:13 <xyang1> erlon-airlong: tox -e bandit
16:08:33 <xyang1> e0ne: you mean ignore medium errors?
16:08:35 <smcginnis> We should review them again in Cinder. But I tried to clean up the errors in the past and they were all false positives.
16:08:49 <dstanek> keystone runs bandit as a part of its pep8 checks
16:08:51 <erlon-airlong> xyang1: hmm, ok, it does not run the full devstack harness
16:09:04 <e0ne> xyang1: I mean we have to fix high priority first
16:09:07 <jgriffith_away> smcginnis: +1 false positives was the problem I had
16:09:14 <xyang1> e0ne: sure
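The false positives and severity counts discussed above are typically tamed in two ways with Bandit: CLI thresholds (`-ll` / `-ii` report only medium-or-higher severity / confidence findings) and per-line `# nosec` markers on findings that have been reviewed. A minimal sketch of the latter (the helper below is illustrative, not Cinder code):

```python
import subprocess

def run_trusted(cmd):
    """Run a fixed, trusted command list (no shell involved).

    Bandit's B603 check flags subprocess use; once the call has been
    reviewed as safe, a "# nosec" marker suppresses the finding.
    """
    return subprocess.check_output(cmd).decode().strip()  # nosec

print(run_trusted(["echo", "hello"]))
```

The `tox -e bandit` target mentioned above wraps a `bandit -r cinder ...` invocation; adding `-ll -ii` to it is one way to act on the "fix high priority first" suggestion.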
16:09:22 <jgriffith_away> I'm not away
16:09:30 <smcginnis> jgriffith_away: Sure you are.
16:09:33 <smcginnis> :)
16:09:46 <jgriffith> :)
16:09:46 <e0ne> jgriffith_away: are you sure? :)
16:09:54 <xyang1> :)
16:10:25 <jgriffith> jessegler: have you received any feedback from others about false positives?
16:11:35 <jessegler> I haven't had the chance to talk to too many people about it
16:11:42 <xyang1> jessegler: I'll say get some feedback from other teams such as Keystone.  ask if they have problems with false positives
16:11:48 <Swanson> hi
16:11:58 <erlon-airlong> Swanson: late
16:12:07 <erlon-airlong> Swanson: :)
16:12:17 <jessegler> Will do!
16:12:27 <xyang1> jessegler: great, thanks
16:12:51 <smcginnis> jessegler: I would love to get to the point where we can use that and rely on the results to catch issues.
16:13:08 <erlon-airlong> smcginnis: xyang1: can I get a ride on the jobs topic?
16:13:12 <xyang1> jessegler: also can you bring this up at the security meeting? ask if they hear about false positives?
16:13:12 <jessegler> Yeah, me too
16:13:28 <jessegler> Yeah
16:13:32 <xyang1> erlon-airlong: go ahead
16:14:01 <erlon-airlong> I was planning, and dulek also mentioned, to make the lvm-multibackend job -nv
16:14:18 <erlon-airlong> it is pretty stable in the runs I did
16:14:27 <xyang1> jessegler: are you all set?
16:14:34 <smcginnis> erlon-airlong: If it's pretty stable, that would be good.
16:14:47 <erlon-airlong> and will test the basic migration cases as the migration tests land
16:14:58 <erlon-airlong> smcginnis: nice, I'll send a patch later
16:15:06 <jessegler> Yep
16:15:15 <e0ne> what does 'it is pretty stable' mean? do we have any numbers?
16:15:25 <erlon-airlong> nice
16:15:31 <xyang1> jessegler: great!  once you get more feedback from others, we can talk here with the cinder team again
16:15:42 <smcginnis> xyang1, jessegler: +1
16:15:50 <xyang1> smcginnis: thanks
16:15:52 <erlon-airlong> e0ne: no, it only runs on request, but I never saw it failing for reasons that it shouldn't fail
16:16:08 <erlon-airlong> e0ne: so, putting it in as -nv we will be able to know
16:16:23 <jessegler> thanks!
16:16:37 <smcginnis> jessegler: Thanks for raising the topic. xyang1: you too.
16:16:45 <e0ne> erlon-airlong: :(
16:17:01 <e0ne> erlon-airlong: can we get any stats from infra?
16:17:01 <smcginnis> erlon-airlong: NV should be safe. And if we find it does have too many issues, easy enough to revert.
16:17:11 <scottda> erlon-airlong: +1 I say make in non-voting so we can see results.
16:17:17 <smcginnis> erlon-airlong: But if you're feeling confident about having it run always, then I think it might be a good time.
16:17:33 <erlon-airlong> e0ne: ? I can see if we can, but I don't believe there will be too many runs
16:18:04 <erlon-airlong> smcginnis: I'm pretty confident that it will be stable
16:18:26 <smcginnis> +1 then
16:18:36 <erlon-airlong> smcginnis: it will also help to prevent bugs like this: https://review.openstack.org/#/c/407089/
16:19:08 <smcginnis> erlon-airlong: I would still prefer to actually understand that bug and the failure and get it properly fixed rather than reverted.
16:19:17 <e0ne> erlon-airlong: http://grafana.openstack.org/ could help
16:19:31 <scottda> smcginnis: +1 I hope to look at that today
16:19:33 <erlon-airlong> smcginnis: sure, scottda will have a deep look into it
16:19:40 <erlon-airlong> scottda: hmm, great!
16:19:40 <smcginnis> scottda: Awesome, thanks!
16:19:50 <erlon-airlong> e0ne: thanks
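For reference, making a job non-voting in the Zuul layout of that era was a one-line change in openstack-infra/project-config; a sketch, assuming the job keeps its existing name (the name below is illustrative):

```yaml
# zuul/layout.yaml (project-config) — illustrative entry
jobs:
  - name: gate-tempest-dsvm-lvm-multibackend
    voting: false
```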
16:19:53 <smcginnis> #topic Open
16:19:58 <mars> Hello everyone, I'd like to ask you some questions.
16:20:00 <erlon-airlong> o\
16:20:00 <smcginnis> Anything else?
16:20:03 <scottda> tbarron: If you are around, there's a thought that a patch of yours might be involved...
16:20:04 <erlon-airlong> \o
16:20:05 <smcginnis> mars: Sure, what's up?
16:20:09 <mars> I'm interested in a blueprint: https://blueprints.launchpad.net/cinder/+spec/in-use-volume-migrate-rbd. I want to ask whether I can take this bp and whether the community supports it. Is it worth doing? As far as I know ceph can be accessed directly through the network, and this bp aims to convert that network access into access via a device path that ceph has mounted. I'd like to hear your comments or suggestions, thank you!
16:20:20 <erlon-airlong> me again after this one :)
16:20:38 <e0ne> erlon-airlong: I'm sorry, here is a correct link: http://graphite.openstack.org/
16:20:49 <erlon-airlong> e0ne: ok
16:20:49 <mdovgal> yes. i want to discuss this patch https://review.openstack.org/#/c/399003/
16:20:54 <tbarron> scottda: just revert it (my old patch) :)
16:21:01 <smcginnis> eharney: Thoughts on that one?
16:21:05 <mars> smcginnis:can we do this bp?
16:21:16 <mars> or rather, can I?
16:21:36 <eharney> oops, i had fallen off of irc...
16:21:40 <viks> I would like to update on https://review.openstack.org/#/c/382097/
16:21:40 <smcginnis> mars: Reading through it now. Would like some input from RH.
16:21:45 <mdovgal> we have different opinions about it in comments
16:21:48 <smcginnis> eharney: :)
16:22:02 * diablo_rojo sneaks in the back of the room
16:22:06 <eharney> which thing?
16:22:12 <smcginnis> eharney: https://blueprints.launchpad.net/cinder/+spec/in-use-volume-migrate-rbd
16:23:23 <eharney> not sure at the moment, will have to run that one by jbernard
16:24:07 <smcginnis> mars: I think you can probably start work on it. I don't see any obvious reason why we wouldn't want it at least at some point.
16:24:31 <mars> ok ,I got it,thank you
16:24:35 <smcginnis> mars: Thanks
16:24:41 <smcginnis> erlon-airlong: You had one next I believe?
16:24:58 <erlon-airlong> smcginnis: I have put some patches for the multimatrix tests
16:25:20 <erlon-airlong> smcginnis: there is one in cinder that would be good to have more people looking at
16:25:21 <smcginnis> erlon-airlong: Links?
16:25:34 <erlon-airlong> smcginnis: https://review.openstack.org/#/c/381736/
16:25:49 <erlon-airlong> smcginnis: https://review.openstack.org/#/c/381737/
16:25:57 <smcginnis> erlon-airlong: Oh nice, I missed that one. Yeah, those would be good.
16:26:02 <smcginnis> I'll add it to my queue.
16:26:03 <e0ne> #link https://review.openstack.org/#/c/381737/
16:26:10 <erlon-airlong> smcginnis: https://review.openstack.org/#/c/375103/
16:26:29 <erlon-airlong> smcginnis: this last one is on devstack-nfs-plugin
16:26:37 <smcginnis> #link https://review.openstack.org/#/c/381736/
16:26:42 <e0ne> erlon-airlong: you need to rebase https://review.openstack.org/#/c/381737/
16:26:50 <e0ne> smcginnis: thanks:)
16:26:54 <smcginnis> erlon-airlong: OK, hopefully we can get some eyes on it soon.
16:27:02 <smcginnis> e0ne: Thank you! :)
16:27:25 <smcginnis> OK, mdovgal, I think you were next.
16:27:29 <erlon-airlong> smcginnis: scottda: here is a working run of the job (I tweaked devstack-gate to run it as if it were the real job, so it is possible to see it working): http://logs.openstack.org/92/396592/7/check/gate-tempest-dsvm-neutron-full-ubuntu-xenial/ad245d7/console.html.gz
16:27:35 <mdovgal> smcginnis, tnx)
16:27:40 <erlon-airlong> e0ne: ok, I'll
16:27:45 <erlon-airlong> smcginnis: thanks
16:28:22 <erlon-airlong> smcginnis: for me thats all
16:28:38 <smcginnis> erlon-airlong: Thank you
16:29:06 <mars> smcginnis: This bp is derived from one by a contributor named Ji.Wei, https://blueprints.launchpad.net/cinder/+spec/in-use-volume-migrate-to-ceph, but over the past six months its status is still Not Started, so I contacted him hoping he could assign it to me. I haven't gotten any response from him, so I resubmitted the blueprint and assigned it to myself. If that's OK I will submit a template for it as soon as possible.
16:29:17 <mdovgal> in comments we have different opinions about changing log level for volume attach/detach operation
16:29:31 <e0ne> our operators want this patch now because there is no way to find out what went wrong in the nova-cinder attach flow w/o debug mode :(
16:30:04 <e0ne> we already have all the needed data at INFO level on the nova side, but cinder's part is missing
16:30:29 <e0ne> #link https://review.openstack.org/#/c/399003/
16:30:39 <smcginnis> mdovgal: Should be easy enough to put more detailed logging in the error handling.
16:31:16 <e0ne> smcginnis: the problem is: we do not know where error could happen: on nova's side or cinder
16:31:53 <e0ne> smcginnis: so full logs will help operators find issues using something like kibana/elasticsearch
16:31:56 <smcginnis> e0ne: Ah, so the operation could fail, but it didn't fail on the Cinder side so there wouldn't be any clues unless this is done at info instead of debug. That right?
16:32:12 <e0ne> smcginnis: correct
16:32:33 <smcginnis> e0ne, mdovgal: OK, thanks. That makes more sense then.
16:32:43 <smcginnis> mdovgal: It needs a rebase now unfortunately.
16:32:51 <e0ne> smcginnis, mdovgal: great
16:32:59 <mdovgal> smcginnis, yes, of course
16:33:00 <smcginnis> mdovgal: If you can get that rebased and everything passing again, I think you've convinced me.
16:33:10 <smcginnis> :)
16:33:15 <e0ne> mdovgal: please, add a link to meeting discussion to review or a commit message
16:33:17 <mdovgal> i will do it as soon as possible. thanks
16:33:38 <smcginnis> mdovgal: Thanks for explaining it.
16:33:39 <mdovgal> e0ne, ok
16:33:44 <e0ne> mdovgal: IMO, link in comments to patch would be enough
16:33:47 <smcginnis> Anyone else?
16:33:51 <mdovgal> e0ne, i'll add it
16:33:54 <viks> hi https://blueprints.launchpad.net/cinder/+spec/veritas-hyperscale-cinder-driver
16:34:01 <e0ne> smcginnis, mdovgal: thanks!
16:34:52 <e0ne> viks: hi
16:35:32 <e0ne> viks: what do you want to discuss according to your driver?
16:35:42 <e0ne> viks: I don't see CI results for the patch:(
16:35:47 <viks> Just wanted to give an update
16:35:54 <viks> yes we are working on CI environment
16:36:04 <viks> we were able to post patches on the ci sandbox
16:36:22 <stevemar> smcginnis: o/ if you have room, i'd like to give a quick OSC update
16:36:22 <viks> and we are making the necessary changes for the cinder environment now
16:36:35 <stevemar> happy to wait til the agenda topics are over
16:36:52 <smcginnis> stevemar: OK, you're next.
16:37:43 <smcginnis> viks: We're trying to get driver reviews some priority. Not much time left, but if you can get review comments addressed and get the CI reporting very soon, you still have a little time left.
16:38:08 <viks> yes we are trying to get it done on time
16:38:13 <viks> just wanted to update here
16:38:15 <viks> thanks
16:38:28 <smcginnis> viks: OK, thanks. Good luck.
16:38:54 <smcginnis> stevemar: OK, you're up. Going to tell us that osc is all complete for cinder? :P
16:39:11 <stevemar> smcginnis: ha, almost!
16:39:15 <stevemar> that's the update :)
16:39:22 <smcginnis> :)
16:39:31 <stevemar> for v1 commands, we're about 90-95% at parity
16:39:43 <stevemar> for v2 we're at 80% parity
16:39:54 <smcginnis> stevemar: Got the spreadsheet link handy? Might be good for folks to see where things stand.
16:39:59 <stevemar> with patches in flight for the remaining work
16:40:00 <smcginnis> stevemar: Really good progress!
16:40:05 <e0ne> stevemar: what about v3 and microversions?
16:40:11 <stevemar> https://docs.google.com/spreadsheets/d/18ZtWC75BNCwFqLfFpCGGJ9uPVBvUXX0xuXP1yYG0NDA/edit?usp=sharing
16:40:31 <smcginnis> #link https://docs.google.com/spreadsheets/d/18ZtWC75BNCwFqLfFpCGGJ9uPVBvUXX0xuXP1yYG0NDA/edit OSC Cinder command implementation progress
16:40:40 <stevemar> e0ne: catching up is the name of our game, so haven't tackled that yet
16:41:06 <smcginnis> v3 should be easy once v2 is done. It's adding microversion stuff that will take more effort.
16:41:06 <stevemar> the last big hurdle is the encryption type stuff
16:41:37 <e0ne> stevemar: why can't we just implement it in cinderclient as a plugin and make it required?
16:41:39 <erlon-airlong> stevemar: I have always been encouraging folks who add new features in Cinder to put the OSC support in their roadmap too
16:41:41 <scottda> stevemar: Yeah, it seems like OSC needs a generic microversion solution. Seems like it could be tricky, since all services are a bit different.
16:42:06 <stevemar> e0ne smcginnis TBH i haven't scoped out the work needed for microversions and v3 yet -- trying to catch up to a moving target here
16:42:13 <stevemar> scottda: correct
16:42:38 <tbarron> from my POV the two big issues were (1) microversion negotiation support, and (2) running cinder/manila/whatever as standalone service.
16:42:46 <tbarron> is #2 solved?
16:42:58 <e0ne> tbarron: good point
16:43:05 <stevemar> tbarron: nope
16:43:08 <e0ne> :(
16:43:12 <smcginnis> tbarron: Oh, you mean like the brick plugin stuff?
16:43:16 <tbarron> I just mean being able to have a standalone cli that shares with osc
16:43:28 <stevemar> tbarron: it would have to be a plugin
16:43:30 <e0ne> so, we will live with cinderclient cli and brickclient-ext for a while
16:43:44 <tbarron> smcginnis: stevemar yeah, some kind of plugin infra for osc
16:43:59 <stevemar> this is just a heads up about v1 and v2 parity support, that was our initial goal
16:44:02 <smcginnis> e0ne: Yeah, until there is a way to support that, cinderclient CLI can't go away, IMO.
16:44:10 <e0ne> smcginnis: +2
16:44:15 <stevemar> microversions, v3, and manila standalone were out of scope when we created OSC
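For context on why microversions are the hard part: Cinder's v3 API negotiates a version per request via the `OpenStack-API-Version: volume X.Y` header, and each service supports its own range, which is what makes a generic OSC solution tricky. A pure-Python sketch of the comparison (helper names are hypothetical):

```python
def parse(version):
    """'3.27' -> (3, 27), so versions compare numerically, not as strings."""
    major, minor = version.split(".")
    return int(major), int(minor)

def negotiate(requested, server_min, server_max):
    """Serve the request iff server_min <= requested <= server_max."""
    if parse(server_min) <= parse(requested) <= parse(server_max):
        return requested
    return None

# A client asks for a microversion with a header like:
#   OpenStack-API-Version: volume 3.27
print(negotiate("3.27", "3.0", "3.40"))   # served
print(negotiate("2.0", "3.0", "3.40"))    # rejected
```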
16:44:32 <smcginnis> stevemar: Great progress. Certainly more work to do, but just having this base is necessary and a good sign.
16:44:34 <bswartz> stevemar: don't forget cinder standalone
16:44:47 <tbarron> stevemar: yeah, i understand, and I'm an OSC fan, just being pragmatic about how to close the deal
16:44:59 <stevemar> tbarron: close when we're ready :)
16:45:25 <stevemar> tbarron: not all projects will close up at the same time, neutron deprecated their cli \o/
16:45:41 <stevemar> just something to keep in your back pocket
16:45:59 <tbarron> stevemar: great progress with keystone and neutron, I don't mean to be detracting
16:46:05 <xyang1> stevemar: so neutron use OSC only?
16:46:31 <stevemar> xyang1: the neutron CLI is still alive, but they issue a deprecation warning when any command is used
16:46:55 <xyang1> stevemar: do you know when neutron CLI will be removed?
16:46:59 <e0ne> xyang1: keystone deprecated their CLI in favor of OSC
16:47:16 <stevemar> xyang1: something like this: http://paste.openstack.org/show/591683/
16:47:17 <bswartz> keystone did it LONG ago
16:47:32 <stevemar> keystone deprecated it *long* ago and removed it last release
16:47:43 <e0ne> is keystone the only project that removed the CLI from their client?
16:47:53 <xyang1> ok, thanks
16:47:53 <stevemar> xyang1: it'll (neutron) probably stick around for a year -- i don't know the details off hand
16:48:15 <stevemar> e0ne: we're slowly picking targets where we are close
16:48:28 <stevemar> which is why i keep bringing it up here  :D
16:48:49 <smcginnis> stevemar: Communication is key. ;)
16:49:22 <smcginnis> OK, anything else today?
16:49:23 <stevemar> anyway, just a status update that we're close with v1 and v2 -- good to know about the other stuff, we'll get to that as soon as the v1 and v2 work is complete (shouldn't be long now)
16:49:37 <smcginnis> stevemar: Thanks! And nice work.
16:49:52 <stevemar> comment on the google doc if you have questions
16:49:56 <jungleboyj> stevemar ++
16:50:06 * stevemar lunches early
16:51:38 <smcginnis> Alright, if nothing else I think we can end a little early.
16:51:43 <smcginnis> Last call for any other topics.
16:52:43 <erlon-airlong> stevemar: thanks!
16:52:54 <e0ne> I would like to ask review for #link https://review.openstack.org/381547
16:53:08 <smcginnis> e0ne: Tabbed
16:53:17 <e0ne> It will increase our cinderclient functional tests coverage
16:53:31 <smcginnis> OK, thanks everyone. I guess that's it for today.
16:53:37 <jungleboyj> Thanks.
16:53:43 <smcginnis> #endmeeting