16:00:03 #startmeeting Cinder
16:00:03 Meeting started Wed Mar 15 16:00:03 2017 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:06 The meeting name has been set to 'cinder'
16:00:10 hello
16:00:10 #chair jungleboyj
16:00:11 Current chairs: jungleboyj smcginnis
16:00:15 jungleboyj: Just in case ^^
16:00:22 smcginnis: Thanks. :-)
16:00:45 hi
16:00:47 hi
16:00:48 <_alastor_> o/
16:01:01 Oh good, was afraid everyone missed the time change.
16:01:02 just in case of what?
16:01:02 hi
16:01:08 hi
16:01:11 hi!
16:01:12 bswartz: In case I get dropped for some reason.
16:01:17 o/
16:01:23 bswartz: Not in my usual hemisphere.
16:01:24 smcginnis: no ping today? ;-)
16:01:28 ah
16:01:30 Hello :)
16:01:31 geguileo: Oh right! :)
16:01:35 o/
16:01:36 smcginnis: AFAIK, only US switched to DST time now
16:01:43 ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell watanabe.isao,tommylikehu mdovgal ildikov
16:01:50 wxy viks ketonne abishop sivn breitz
16:01:56 e0ne: Yeah. When does Europe?
16:01:57 o/
16:02:13 \o
16:02:14 the last weekend of March in Ukraine
16:02:29 e0ne: Good to be aware of that, thanks!
16:02:40 Hi
16:02:40 #topic Announcements
16:02:41 Same here I believe
16:02:47 DuncanT!
16:02:52 hello
16:02:56 DuncanT: Nice to see you. :)
16:03:02 whoa
16:03:10 DuncanT: !!!!!
16:03:12 Thanks :-)
16:03:22 Just starting to get back into things
16:03:29 * smcginnis pictures DuncanT walking into the Cheers bar.
16:03:47 smcginnis: ++
16:03:47 #link https://etherpad.openstack.org/p/cinder-spec-review-tracking Review focus
16:03:55 We are four weeks out from milestone 1.
16:04:01 DuncanT: Glad to see it. There wasn't enough arguing without you. ;-)
16:04:11 No major Cinder deadlines for P-1, but still a good checkpoint.
16:04:27 It would be really nice if all of the new driver submissions have had a once over by then.
16:04:45 So we're not pointing out spelling errors and the like right before the actual driver deadline.
16:05:29 #topic Tracking driver requirement
16:05:36 eharney: All yours.
16:05:50 well, i added driver-requirements.txt
16:06:06 the aim is for drivers to add their "optional" dependencies there (optional for Cinder, but needed for the driver to work)
16:06:34 #link https://review.openstack.org/443761 Driver dependencies patch
16:06:39 eharney: is it something like requirements.txt?
16:06:45 this started because of difficulties figuring out whether we packaged all of the right things in RDO etc, so should be useful for any downstreams
16:07:00 eharney: I've checked for ceph - there is no rbd and rados packages in pypi:(
16:07:00 e0ne: it's very much like it, it's just not managed by any tools
16:07:03 eharney: So no automated enforcement or anything like that now, just a convention for us to capture these hidden dependencies, right?
16:07:04 e0ne: correct
16:07:32 eharney: I like the general idea
16:07:32 eharney, so these are pypi requirements to use a particular driver ?
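(For context when reading the log: a rough sketch of the kind of entries such a file could hold -- the package names and pins below are illustrative, not the actual contents of Cinder's driver-requirements.txt.)

    # Optional, driver-only dependencies that core Cinder does not need.
    purestorage>=1.6.0    # Pure Storage driver client library
    pywbem>=0.7.0         # SMI-S based drivers
    # rbd / rados         # Ceph RBD bindings -- not on PyPI, shipped with Ceph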
16:07:34 this may well end up being managed by our requirements tools etc at some point, but i haven't really figured out what that looks like, so it's just useful documentation for now
16:07:42 hemna: yes
16:07:44 ok
16:08:06 does it make any sense to create a const in the driver class that has this info?
16:08:12 and then the tools generate this file ?
16:08:14 eharney: But we can also capture non-pypi in there too?
16:08:25 as well as the generate_driver_list.py can generate it too ?
16:08:26 hemna: Ooh, I kind of like that.
16:08:34 * smcginnis likes self documenting automation
16:08:38 kinda like the CI_WIKI_NAME thingy
16:08:38 i think we should determine if/how we will integrate with requirements tools before we integrate it too deeply into cinder's code
16:08:48 eharney: Fair
16:08:48 eharney: +1
16:08:58 but certainly sounds like a useful thing to consider
16:09:11 well it wouldn't be much different than the hard coded driver-requirements.txt except that that file gets generated
16:09:24 $0.02
16:09:47 smcginnis: how non-pypi things work is still somewhat of an open question
16:09:51 eharney, thanks for starting this
16:10:22 there's also another area of non-python things (CLIs), not too sure there yet either, but, this seems like a starting point
16:10:35 eharney: That may be harder to automate, but I think it might be even more useful since I know when I tried to find all of it, it really wasn't obvious where some of this came from.
16:10:45 maybe a driver-bindep.txt
16:10:48 I think just pypi is a great start.
16:11:28 eharney: Anything else we should discuss on that now? Or just an awareness thing at this point?
16:11:33 nothing else from me
16:11:39 eharney: Cool, thanks!
16:11:42 #topic Revisiting adding a Bandit Gate
16:11:52 jonesn, rarora: Hi
16:11:57 https://wiki.openstack.org/wiki/Security/Projects/Bandit#Bandit_Baseline_Gate
16:12:05 hi
16:12:21 #link https://wiki.openstack.org/wiki/Security/Projects/Bandit#Bandit_Baseline_Gate Bandit Baseline
16:12:33 Short story: Having a bandit gate that only checks for added issues will be a fairly small change to the tox.ini and zuul configuration to setup the gate itself.
16:12:51 jonesn, rarora: do we have a fresh report for cinder somewhere?
16:12:58 jonesn: So kind of like what we do with pylint now. We just don't want the number to go up unnoticed.
16:13:22 Exactly like pylint, except we don't have to write the script for it.
16:13:35 e0ne: not off hand
16:14:41 jonesn: I'm ok with this job if there will be a little number of false positive errors
16:15:00 so basically you can just run the bandit-baseline command and it will do a diff... the only issues that will pop up are new ones
16:15:11 There were a set of patches to disable warnings on false positives. Are those still out there?
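(For context, roughly what a delta-only Bandit job could look like on the tox side; the environment name and flags below are illustrative, based on bandit's documented bandit-baseline usage rather than an actual Cinder patch.)

    [testenv:bandit-baseline]
    deps = bandit
    commands = bandit-baseline -r cinder -n 5 -ll -ii
    # bandit-baseline compares the current commit's findings against its
    # parent, so only newly introduced medium-or-higher issues are reported.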
16:15:11 with medium confidence and severity there should not be many
16:16:27 e0ne: We might want to look through the list of things bandit checks for and exclude some entirely
16:16:34 smcginnis: not sure of which patches you're talking about but we can also set up a config to disable certain bugs altogether and people can always #nosec something and leave a comment if they know it isn't an issue
16:16:34 smcginnis: I haven't seen any outstanding patches, but there are probably more to fix
16:16:41 experience with the pylint job has shown that this kind of thing is useful, but really requires some particularly interested people to keep an eye on it
16:17:19 eharney: rarora, jessegler and I would be pretty dedicated to checking the failures
16:17:38 eharney: good point
16:17:39 In fact I was going to ask if there was a way to be notified if a particular gate fails.
16:18:15 jonesn: Not that I know of, but a spot check from time to time should be good at a minimum.
16:18:34 Our only challenge with the way pylint works is since it's non-voting, it tends to get ignored.
16:18:56 But I think usually it doesn't go too long before someone (usually eharney) notices and complains. :)
16:19:20 smcginnis: I'd strongly advocate using this as a trial period, with the goal of moving to a voting gate.
16:19:25 we were planning on doing non-voting at least for now until we can fine tune the bugs, and it would at least be a step in the right direction
16:19:30 So I guess I agree with e0ne - as long as there aren't too many false positives, I think it's good.
16:19:49 jonesn: I'd really want to be more comfortable with it before we make it voting.
16:20:11 Especially since out of the things flagged that I looked in to, not one was an actual security issue.
16:20:31 But I don't want that one real instance to slip through either.
16:20:32 yeah... most of the current things have resulted in adding #nosec from what i've seen
16:20:54 So I'm good for now as long as it's nv.
16:21:05 Could we get some recommendations on which rules to turn off?
16:21:23 Not that I know of off hand, but there may be some things.
16:22:21 Having it non-voting should allow us to do some statistics on which rules end up getting #nosec'd a lot and that might inform which to turn off.
16:22:32 +1
16:22:52 Let's get that added NV, then we can see where to go from there.
16:22:53 ^I could pull together a list of all the #nosec 's that are in the code right now
16:23:04 smcginnis: awesome.
16:23:05 jonesn: That may be useful.
16:23:15 OK, anything else on this topic?
16:23:25 I think we're set
16:23:26 i'm still not sure how many of the current #nosec items are things that should be fixed vs just being disabled for now to have a clean run to start with
16:23:56 eharney: Would you want to review that first?
16:24:18 maybe, and we should probably decide that any time we add one, there's a good comment about why it's safe, or a bug report
16:24:28 +1
16:24:38 eharney: +1
16:24:45 eharney: with this set up we wouldn't have to have a clean run since it just checks the delta so we at least won't have to add any more #nosecs for getting a clean run
16:24:59 sounds good
16:25:12 I did a grab on nosec and only get 9 entries
16:25:31 May be a good exercise for someone to go through and add comments on those explaining why they are not really issues.
16:25:33 also +1 for needing a comment for #nosecs
16:25:52 But I don't think that would need to be the gate on getting a nv job added that just checks the diff.
16:26:38 Alright, I'll move on for now.
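(Illustrative only, not code from the Cinder tree: a small example of the #nosec convention under discussion, with the kind of justifying comment eharney is asking for.)

    import hashlib

    def cache_key(backend_name):
        """Build a short cache key from a backend name."""
        # Bandit flags MD5 as an insecure hash; it is acceptable here because
        # the digest is only a cache key, never a password or integrity check.
        return hashlib.md5(backend_name.encode("utf-8")).hexdigest()  # nosec

    print(cache_key("lvm-1"))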
Please post a comment in #openstack-cinder to let folks know if a patch is submitted to add a job.
16:26:49 Will do.
16:27:02 We can always comment on there if any issues are thought of by then.
16:27:04 jonesn: Thanks!
16:27:12 #topic Dynamic Reconfiguration
16:27:19 Hello :)
16:27:20 diablo_rojo: All yours
16:27:39 #link https://specs.openstack.org/openstack/cinder-specs/specs/ocata/dynamic_reconfiguration.html Dynamic reconfig spec
16:27:47 So with all the changes due to the A/A stuff I wanted to make sure the spec was still accurate
16:28:14 I know we had noted the approach we decided on might not be pretty now that those things have landed.
16:28:20 diablo_rojo: There hasn't been a patch to move this to Pike, right?
16:28:34 the formatting of that page looks borked for some reason
16:28:46 smcginnis, yeah
16:28:50 like the .rst was incorrectly formatted
16:28:58 hemna: Under the Work Items section?
16:29:06 Use cases
16:29:14 hemna, I figured I'd fix that up if we had other changes to make
16:29:23 Wanted to do it all at once.
16:29:25 there are a lot of indentation errors after bullet points, which makes it format funny
16:29:27 Oh, hmm. And Alternatives section too.
16:29:31 not sure what happened there. it didn't recognize the bullet points it looks like
16:29:59 diablo_rojo: I think if you can fix that up and propose a move to Pike, we can comment on there.
16:30:01 I think bullet points were missing a space after *
16:30:24 diablo_rojo: It was already accepted previously, but we can have another review to make sure it still matches the current state of things.
16:30:43 So, I talked to geguileo a bit yesterday and there are two approaches that we could do right now. One: create a new mechanism and have drivers implement it. Or two: modify the sighup to stop all child processes and start the new ones with the new config.
16:31:06 smcginnis, right, just wanted to make sure the approach was still valid before I did a bunch of work and found out no one liked it anymore :)
16:31:11 diablo_rojo spec LGTM
16:31:17 can we safely stop processes though? There could be outstanding actions being taken
16:31:23 diablo_rojo I like the sighug approach
16:31:43 diablo_rojo: draining has the disadvantage of creating an outage while long-running operations (backup, copy to/from image) finish
16:31:44 hemna, yeah that was something I wondered. There could be ongoing processes that never get finished up?
16:31:53 yup
16:32:00 diablo_rojo: sighup sounds good for me
16:32:01 DuncanT, right.
16:32:03 since we don't really track transactions/actions being taken
16:32:17 copy volume <--> image
16:32:19 sighug :)
16:32:22 backup, etc
16:32:23 hemna: I think that comes under the generic heading of 'drain'
16:32:24 we are already using sighup to do the reload within Cinder
16:32:33 smcginnis, quiet.
16:32:39 DuncanT, we don't know what to drain at this point.
16:32:42 we do the drain using oslo services mechanism
16:33:09 Swanson: It just sounds so friendly.
16:33:17 smcginnis, hugged to death
16:33:21 also, we need to add more information in the deployer impact section
16:33:22 geguileo: So this feature is already implemented?
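(A minimal operator-side sketch of the SIGHUP-driven reload being discussed; the exact command is illustrative, and whether in-flight work survives the drain is precisely the open question above.)

    # 1. Edit /etc/cinder/cinder.conf, e.g. add or change a backend section.
    # 2. Ask the running cinder-volume service to re-read its configuration:
    kill -HUP "$(pgrep -of cinder-volume)"
    # With a drain-and-restart approach, child processes are stopped and new
    # ones started with the new config; long-running work such as backups or
    # volume<->image copies is what makes that outage painful.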
Last I checked, our sighup handling was dangerously broken
16:33:34 (It has been a while)
16:33:35 +1 for *sighug*, we should start a new Big Tent project with that name
16:33:36 DuncanT: Last time I checked it worked
16:33:44 it's obvious, but if you change replication settings, you could potentially orphan existing replicated volumes
16:33:47 hemna, When I get the patch up to clean things up you can make comments and I will integrate them. Sound good?
16:33:49 DuncanT: that was for my Barcelona talk
16:34:00 diablo_rojo sure
16:34:10 geguileo: I'll take another look and see what I can break
16:34:14 the problem is that you are without service for as long as it's draining the service
16:34:21 hemna, cool I will let you know as soon as I get it up there
16:34:33 diablo_rojo: +1
16:34:36 bash: kill: SIGHUG: invalid signal specification :-(
16:34:39 and cinder backup and volume can take a long time when we are talking about the data plane
16:34:48 geguileo: Ah, yes, that was one of my definitions of 'broken', though that wasn't the dangerous one
16:34:53 geguileo, that's what concerns me
16:35:14 hemna, agreed.
16:35:18 hemna: DuncanT I believe that's what we are trying to fix now
16:36:06 diablo_rojo also need to note that changing things like FCZM settings can nuke existing attachments
16:36:19 geguileo: What you suggest is working is exactly what the spec proposes
16:36:26 hemna, can do.
16:36:50 DuncanT: No, I'm saying that the sighup mechanism is already there, we just need to modify its behavior to whatever we agree
16:37:12 geguileo: The spec suggests drain and restart
16:37:29 DuncanT: Mmmmm, then we already have that
16:37:46 Lol
16:37:57 We're done! Beer time!
16:38:05 DuncanT: Yay!
16:38:34 geguileo accidentally implemented my spec lol
16:38:42 we could at least support adding new backends and removing them through sighup
16:38:44 :)
16:38:48 geguileo: But does it refresh from cinder.conf?
16:38:55 jungleboyj: yup
16:39:14 * jungleboyj is baffled ...
16:39:15 jungleboyj: at least I think so... now I'm unsure
16:39:34 geguileo: I am going to have to go try it.
16:40:50 Well, diablo_rojo and geguileo, maybe you two should talk a bit.
16:40:57 :-)
16:41:03 diablo_rojo: But I think getting it updated is probably worth it.
16:41:09 smcginnis, a bit more than we did yesterday anyway lol
16:41:14 I think the reloading of the config is a minor thing
16:41:14 :D
16:41:22 smcginnis, yep I put it towards the top of my todolist
16:41:33 diablo_rojo: Excellent
16:41:39 The big thing is if we have any ideas about how to prevent the service being out while draining
16:41:42 geguileo: The one man Cinder show.
16:41:51 jungleboyj, +1
16:42:37 diablo_rojo: If you get an update, probably good to add DuncanT and hemna as reviewers to make sure their concerns get addressed.
16:43:33 diablo_rojo: Maybe we could change it to make it only drain services that have changed the config or are being completely removed
16:43:44 s/services/backends
16:43:53 smcginnis, can do
16:43:55 and spins up a new process for added backens
16:43:56 geguileo: Nice, would be good if it can be smart about it.
16:43:58 backends
16:44:26 diablo_rojo: I'm going to move on. I think you at least have next steps.
16:44:40 smcginnis, thanks :)
16:44:43 #topic Forum Topic Brainstorming
16:44:52 #link https://etherpad.openstack.org/p/BOS-TC-brainstorming Brainstorming Etherpad
16:45:03 They are looking for topics for the forum.
16:45:20 I've also created a Cinder specific one for us:
16:45:26 #link https://etherpad.openstack.org/p/BOS-Cinder-brainstorming Cinder topic brainstorming.
16:45:35 All captured here:
16:45:39 #link https://wiki.openstack.org/wiki/Forum/Boston2017
16:45:47 smcginnis: how much time do we have?
16:45:58 Some good topics from jgriffith. We should add those to the etherpad.
16:46:00 smcginnis: how is this different from design summit in the past?
16:46:13 xyang: Unfortunately I have no idea for any of those.
16:46:22 diablo_rojo: Any foundation guidance you can provide?
16:46:29 smcginnis hehe... I meant to add those as topics for today's meeting but :)
16:46:48 I updated the wiki to reflect that after I noticed I screwed it up
16:46:49 jgriffith: Probably good enough here. Will do that next
16:46:52 :)
16:47:01 Been there, done that.
16:47:38 smcginnis: we'll probably get an answer on what to do with translations?
16:47:48 Running out of time, so I'll move on. But add ideas and just know we may or may not be able to discuss at the forum depending on how timing is.
16:48:00 xyang: Yes, hoping that's finalized by then.
16:48:03 xyang remove them all!
16:48:04 smcginnis: I saw that as a forum topic
16:48:06 :P
16:48:10 #topic 3'rd party CI
16:48:13 hemna: :)
16:48:13 xyang: It is just more like the fishbowl sessions.
16:48:25 rm -f _LE, _LW, _LDIE!
16:48:32 hemna: :)
16:48:41 jgriffith: All yours now.
16:48:50 #link https://etherpad.openstack.org/p/cinder-ci-proposals Changing 3'rd party CI requirement
16:48:54 * jungleboyj shakes my head at hemna
16:48:56 smcginnis thanks!
16:49:10 Ok, so folks that were in ATL are familiar with this
16:49:18 at least if you stuck around Friday :)
16:49:29 So I tried to summarize in that etherpad
16:49:47 basically IMO we've stopped making any real forward/beneficial progress on 3'rd party CI
16:49:56 so... maybe we should try something different
16:50:40 The TLDR is stop *requiring* a true Continuous Integration for now
16:51:07 instead require that CIs respond to triggers: "run <ci-name>" and "run all-cis"
16:51:13 more like an admission that the current "continuous" requirement is not being met by nearly all 3rd party CI
16:51:22 bswartz correct
16:51:35 in other words, quit fooling ourselves :)
16:51:38 jgriffith: Doesn't seem like a terrible idea given where we are and the total lack of improvement in the last 12 months
16:51:46 and focus on actual tasks to improve
16:52:13 jgriffith: With the addendum that it's sad that we have to do this
16:52:15 by isolating these to a more periodic and concerted effort we can publish, analyze and focus on getting things fixed up
16:52:23 jgriffith, +1
16:52:28 DuncanT yeah, but such is life
16:52:44 So if folks want, take a look at the etherpad and add comments/suggestions
16:53:10 Not biting off more than we can chew. Start small and work our way up from there to actually make things better.
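(A hypothetical sketch of how a third-party CI could match those trigger comments instead of running on every patch set; the CI name here is made up.)

    import re

    # Matches a reviewer comment containing "run example-vendor-ci" or
    # "run all-cis" on a line of its own.
    TRIGGER = re.compile(r"^\s*run\s+(example-vendor-ci|all-cis)\s*$",
                         re.IGNORECASE | re.MULTILINE)

    def should_run(comment_text):
        """Return True if a reviewer asked this CI (or all CIs) to run."""
        return bool(TRIGGER.search(comment_text))

    print(should_run("Patch Set 3:\n\nrun all-cis"))      # True
    print(should_run("just an ordinary review comment"))  # False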
16:53:37 I'd propose doing both a dummy patch that just pulls from master and runs everything AND a known "everything should fail this" test
16:53:39 DuncanT: it turns out that maintaining 3rd party CIs is like a full time job for some people, and I don't think that was ever the intention when we started down this road in Atlanta (that ATL design summit, not the PTG)
16:53:59 jgriffith: I like the "pipecleaner" everything-should-fail-on-this patch
16:54:13 bswartz DuncanT so the only caveat is that you'll still have to maintain a CI, it just won't have the same load demand or elasticity requirements
16:54:24 There goes my side business of running CIs for companies. :D
16:54:25 smcginnis yeah, that's awesome!
16:54:53 How many of us look at driver CI for a driver patch?
16:55:02 and this way, as we go along if we get better we can do things like up the frequency or revisit true continuous
16:55:06 I'm still not going to +A a patch for a driver, unless it passes CI
16:55:25 hemna read the etherpad, I knew you'd say that :)
16:55:28 that's consistent I think
16:55:33 hemna: yeah that's important
16:55:37 and I think that's perfectly reasonable/valid
16:55:41 bswartz: Yeah... I think the fact that it is a full time job suggests that something, somewhere needs some serious rethinking, but that turns out to be a very big topic and probably outside of the cinder remit
16:55:42 hemna: I do. And I guess the good thing about this is we can just trigger extra runs on the ones we care about.
16:55:42 ok coolio.
16:55:44 vendor CIs should run on vendor driver patches
16:56:03 even if it means a core has to add the comment "run xyz" that's fine
16:56:08 +1
16:56:10 jgriffith,+1
16:56:22 Side bonus - less scrolling in gerrit. ;)
16:56:24 if you're an overachiever your CI will already be running it, OR you'll add the comment to the review when you submit it
16:56:29 jgriffith: +1
16:56:39 smcginnis hehe
16:56:41 jgriffith: Good point. There's no reason you can't run on all patches if you want to.
16:57:03 Note, I did point out that this was the *requirement* but that folks can continue testing every patch if they want
16:57:27 Going to run out of time, so let's let folks read up on the etherpad.
16:57:28 I should clarify that.. if your CI sucks and you're cluttering things with nonsense you will be dealt with harshly :)
16:57:33 kk
16:57:36 #topic Filtering and the API
16:57:38 +1
16:57:41 F'ing FILTERS
16:57:43 it should also be pointed out that continual failing will be a lot less acceptable if you only have to run once a week
16:57:46 can we stop the madness!
16:57:53 bswartz: +1
16:57:53 jgriffith: Filters are good
16:58:05 seriously, the filtering every which way to Sunday is ludicrous
16:58:16 DuncanT: I actually did say "let me put on my Duncan hat" when this came up at the PTG.
16:58:18 Let's have a generic filtering mechanism and stop
16:58:20 OpenStack - we love the madness.
16:58:40 generic, consistent filtering mechanism +1
16:58:40 I have a spec and first round patch...
16:58:41 hemna: Nice.
A new mantra
16:58:42 * jgriffith gets links
16:58:49 * bswartz eats popcorn
16:59:00 * jungleboyj throws popcorn
16:59:01 https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/generalized-filtering-for-cinder-list-resource
16:59:17 #link https://review.openstack.org/#/q/status:open+project:openstack/cinder+branch:master+topic:bp/generalized-filtering-for-cinder-list-resource Filtering patches
16:59:25 https://review.openstack.org/#/c/441516/
16:59:41 So there's a start of things, and the spec
17:00:10 jgriffith: Filter by tenant (for admins) is missing and important
17:00:14 My proposal is that we can go ahead and expose everything the DB lets us filter on if we want, by using a --filter arg and a json file to control what the admin wants to allow
17:00:32 Sorry, out of time.
17:00:32 DuncanT it's just another filter k/v pair isn't it?
17:00:43 volume list filter=tenant:id
17:00:44 ok
17:00:50 Let's go over to #openstack-cinder.
17:00:52 til next time dirty rotten filters!
17:00:55 jgriffith: I'm just trying to figure that out, but yeah, I think so
17:00:56 hehe
17:01:00 Thanks everyone.
17:01:03 #endmeeting
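(A rough sketch of the --filter-plus-JSON idea floated at 17:00:14; the file name, keys, and CLI syntax below are purely illustrative and are not taken from the spec or the linked patches.)

    # Hypothetical admin-controlled allow-list of queryable keys per resource,
    # e.g. /etc/cinder/resource_filters.json:
    #
    #   {
    #       "volume": ["name", "status", "bootable", "project_id"],
    #       "snapshot": ["name", "status", "volume_id"]
    #   }
    #
    # An admin-only listing filtered by tenant might then look like:
    cinder list --filter project_id=<tenant-id>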