16:00:08 #startmeeting Cinder
16:00:13 Meeting started Wed Dec 16 16:00:08 2015 UTC and is due to finish in 60 minutes. The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:17 The meeting name has been set to 'cinder'
16:00:19 hi
16:00:21 Agenda: https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:00:24 hi
16:00:26 hi
16:00:34 Courtesy ping: dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind
16:00:39 hi
16:00:42 hi
16:00:44 hi
16:00:44 Hi!
16:00:47 hi
16:00:48 smcginnis: Thanks :-)
16:00:49 hi.
16:00:49 Hey everyone.
16:00:54 o/
16:00:56 smcginnis: thanks!
16:01:06 hi
16:01:23 #topic Announcements
16:01:32 "Add your IRC nick to this list to be pinged at the start of the meeting" I like this feature :)
16:01:49 General info - voting has started for N and O naming. Yay!
16:02:06 o/
16:02:06 You probably should have received an email from Monty if you've been contributing.
16:02:14 Null or Nameless - not bad!
16:02:25 Nameless or Null. There are no other choices. ;)
16:02:29 e0ne: Nameless would be kind of funny. :)
16:02:40 haha
16:02:42 totally
16:02:45 Too bad there's not a None, Texas.
16:02:47 I think Null will cause all kinds of errors
16:02:54 yes :) I voted for Fortune in the past
16:03:04 Hello.
16:03:07 hey
16:03:09 Hey :)
16:03:16 #topic Release Status
16:03:20 Hello!
16:03:25 #link https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking Spec tracking
16:03:39 I've updated a few of the specs on there to include links to the patches actually implementing them.
16:03:58 If you're driving any of those specs, feel free to add any links to pieces you think need attention.
16:04:07 I'll try to use that as a focus for reviews.
16:04:50 #link http://ci-watch.tintri.com/project?project=cinder&time=7+days
16:04:59 CI results are still a little mixed.
16:05:07 I've contacted the largest offenders.
16:05:21 I think those of you having CI issues are aware of it.
16:05:33 Please make sure that is getting the attention it needs internally.
16:05:48 smcginnis: are we going to drop drivers if their CI is unstable?
16:05:55 I will likely post removal patches soon if a few of them don't stabilize.
16:05:56 I mean, to do it in Mitaka
16:06:10 e0ne: Yeah, I think we need to enforce our policies.
16:06:53 smcginnis: sounds reasonable. you should probably mail the maintainers soon to notify everybody
16:07:25 #info Bug stats: Cinder- 473, cinderclient- 37, os-brick- 12
16:07:28 smcginnis: yes, you need to be very bold on the policies or we're going to have a lot of complainers about that
16:07:35 Stats are looking a little better.
16:07:39 erlon: Agreed
16:07:42 The third-party-announce list deserves a friendly reminder too.
16:07:53 dulek: +1
16:08:11 dulek: Good call, I'll post something on there before posting any patches.
16:08:31 #link https://bugs.launchpad.net/nova/+bugs?field.status:list=NEW&field.tag=volumes
16:08:52 Anyone that can spend some time there ^^ - help triaging and providing input is always appreciated.
16:09:12 OK, that's all I've really got...
16:09:21 #topic Storing dicts on driver 'provider_location'
16:09:30 erlon, dulek: Hey
16:09:41 So, this topic came up after thingee noticed that some drivers (including Hitachi) could be using metadata to store LUN id information, which could be potentially dangerous and cause a security issue.
16:10:10 Reviewing the Hitachi driver I saw that the driver needs to store more than one field about the volume. I tried to move the metadata to provider_location, but this field cannot be stored in the form of a dictionary, so then came the question of why it couldn't be stored as a dict.
16:10:13 we're going to store provider_location in our DB
16:10:16 There is also a provider_id field in addition to provider_location
16:10:27 at least, I'm going to publish a spec for it
16:10:37 provider_id is ideal for storing LUN id kind of info
16:10:59 xyang2: mhm
16:11:13 e0ne: will it then be possible to store more info than a string?
16:11:48 erlon: yes, JSON (dict) sounds like a good solution
16:12:06 e0ne: great!
16:12:13 Is there a strong reason not to just use json.dumps/loads for those that need more than what's already there?
16:12:26 isn't the length typically limited to VARCHAR(255)? not much space there.
16:12:36 flip214: +1
16:12:51 flip214: +1
16:13:11 We can extend that (did anyone know that expanding a VARCHAR is an online operation on MySQL? :D).
16:13:11 erlon, flip214: do you have an example of data which can't fit in 255 chars of JSON?
16:13:32 It ultimately needs to be a string in the db, right? I'd be hesitant to allow more than 255. That already seems like a lot of space.
16:13:55 dulek: it's not a big deal to extend. do we know how big it should be?
16:14:20 e0ne: actually I don't recall any situation right now
16:14:34 but if it's not a big deal to extend
16:14:42 erlon: it would be helpful to get some input for it
16:14:43 e0ne: I get the point.
16:14:57 provider_id is 255 and provider_location is another 255, still not enough?
16:14:59 https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L160
16:15:12 I'd like to see a compelling reason to make it bigger.
16:15:27 smcginnis: +2
16:15:42 e0ne: the DRBD resource configuration file can easily be 2 kB of text, if there are quite a few nodes and options set.
16:15:48 smcginnis: that is my point: we need use cases for it
16:16:09 not that this would need to be stored there right now; JFI.
16:16:14 flip214: Do you really need to store an entire config file for every volume in the database?
16:16:22 flip214: OK, whew. :)
16:16:39 smcginnis: no.
16:16:42 kind of seems like if we really need more data, something with more of a key-value store might make more sense than forcing drivers to stash JSON data in a single field?
16:16:54 just saying that there are things that are bigger than 255 bytes ;)
16:17:15 But I still question whether more than that could/should be stored in the database.
16:17:44 patrickeast: TBH, I don't like the idea of a table with ID KEY VALUE and each being limited to 255 or so (again!)....
16:18:22 Doesn't some driver store internal info in admin_metadata?
16:18:33 Putting large dumps of data into the DB is likely to slow Cinder down. That is concerning.
16:18:43 erlon, dulek: Any other discussion/input needed, or should we wait for a spec?
16:18:51 jungleboyj: I totally agree with you
16:18:58 jungleboyj: +1
16:19:53 smcginnis: I'll look for use cases in cinder drivers to see how many other drivers are using provider_location, and how
16:19:54 jungleboyj: The data is already stored there now as metadata, right?
16:20:16 smcginnis: then we can know if there are cases that justify the implementation
16:20:55 smcginnis: Well, I don't have a driver, I'm hardly the person who should work on it.
16:20:58 erlon: That makes sense to me. I'd like to see more data before making any major changes.
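The json.dumps/loads approach asked about above is straightforward; a minimal sketch, assuming a hypothetical driver whose private volume info (the lun_id and target_port fields here are made up) must fit the VARCHAR(255) provider_location column referenced in the models.py link:

```python
import json

# VARCHAR(255) limit in cinder/db/sqlalchemy/models.py
PROVIDER_LOCATION_MAX = 255


def encode_provider_location(info):
    """Serialize a dict of driver-private volume info into the
    provider_location string column."""
    encoded = json.dumps(info, separators=(',', ':'))  # compact form
    if len(encoded) > PROVIDER_LOCATION_MAX:
        raise ValueError('provider_location payload exceeds %d chars'
                         % PROVIDER_LOCATION_MAX)
    return encoded


def decode_provider_location(encoded):
    """Inverse of encode_provider_location."""
    return json.loads(encoded)


# Hypothetical usage with made-up fields:
location = encode_provider_location({'lun_id': 12, 'target_port': 'CL1-A'})
info = decode_provider_location(location)
```

Compact separators buy a little room under the 255-char cap; anything larger argues for provider_id plus a concrete use case, per the discussion above.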
16:21:06 smcginnis: for the Hitachi driver the question came up only because I thought it was weird to have to use json.dumps() to store that kind of info
16:21:08 dulek: ;)
16:21:26 geguileo: I thought this was additional data they were looking to put out there.
16:21:27 smcginnis: one more point
16:21:40 Mike suggested logging a bug against the driver
16:21:55 jungleboyj: Maybe I misunderstood, I thought this was about moving that data
16:22:25 smcginnis: in the end I found out that the metadata field is duplicated in provider_location, and the driver uses only provider_location to retrieve the volume
16:22:30 erlon: Wasn't it determined that it's only used on initial connection, so there wasn't a security concern in someone changing it after the fact?
16:22:37 erlon: Oh!
16:22:39 so it's not a security problem in the driver
16:23:12 erlon: Then yeah, I'd say file a bug and have them clean that up. No point in storing it in both locations.
16:23:31 smcginnis: ok
16:23:44 I think they are one of the ones at risk of removal if CI doesn't shape up, but that's another issue.
16:24:15 smcginnis: ok, we can talk about that later
16:24:18 erlon: OK, any other input needed for now?
16:24:30 smcginnis: for now I'm fine
16:24:39 Great. Thanks!
16:24:47 #topic python-cinderclient: tempest vs functional tests
16:24:53 e0ne: You're up.
16:24:56 thanks
16:25:29 the issue is: we and many other python-*clients use tempest to verify that a new version works with the rest of the services
16:25:38 but tempest doesn't use cinderclient
16:26:05 we run ~1500 tests, and only a few of them (nova attach features) are related to cinderclient
16:26:31 so I propose adding such tests to the cinderclient functional tests and dropping the tempest job
16:26:49 it will make our CI 30 minutes faster!
16:27:11 e0ne: I'm all for that.
16:27:14 we don't need to run full tempest for each commit to python-cinderclient
16:27:28 e0ne: I'm an advocate for _effective_ testing.
16:27:38 btw, obutenko volunteered to help me with it
16:27:48 obutenko: Thanks!
16:27:53 we can start it soon
16:27:55 e0ne: should we do that for cinder too, maybe later?
16:28:04 AFAIK, other projects will do it
16:28:22 xyang2: good idea, I'm for it
16:28:30 xyang2: +1
16:28:37 xyang2: let's start with cinderclient - it will be easier and faster
16:28:47 e0ne: sounds good
16:28:59 We keep increasing the Jenkins load, but I'm not convinced we're actually adding value. It would be good to get rid of parts that aren't necessary.
16:29:21 Anyone else have thoughts or input?
16:29:34 xyang2: You mean drop tempest from Cinder? This seems odd, functional tests aren't integration tests like Tempest.
16:29:49 smcginnis: I'm talking about decreasing the number of tests and the time they take
16:30:00 e0ne: Yep!
16:30:09 I am all for reducing test time.
16:30:20 dulek: we mean to move the functional tests to cinder and leave only integration tests in tempest
16:30:25 so maybe a dumb question, but why isn't tempest using cinderclient for the volume tests?
16:30:43 patrickeast: It calls the API directly. That's how Tempest works.
16:30:44 should we maybe switch that so we *are* getting additional testing from it? then maybe just restrict which tempest tests we run?
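A minimal sketch of the kind of test being proposed here, driving python-cinderclient itself against a running endpoint rather than raw REST calls; the OS_* environment variables, polling loop, and unittest base class are assumptions, not the actual cinderclient functional test harness:

```python
import os
import time
import unittest

from cinderclient import client as cinder_client


class VolumeCRUDTest(unittest.TestCase):
    """Exercise the client library itself, not just the REST API."""

    def setUp(self):
        # Credentials come from the usual OS_* environment variables;
        # a devstack-style deployment is assumed.
        self.client = cinder_client.Client(
            '2',
            os.environ['OS_USERNAME'],
            os.environ['OS_PASSWORD'],
            os.environ['OS_TENANT_NAME'],
            os.environ['OS_AUTH_URL'])

    def test_create_and_delete_volume(self):
        volume = self.client.volumes.create(size=1, name='func-test-vol')
        try:
            # Poll until the volume is usable.
            for _ in range(60):
                volume = self.client.volumes.get(volume.id)
                if volume.status == 'available':
                    break
                time.sleep(1)
            self.assertEqual('available', volume.status)
        finally:
            self.client.volumes.delete(volume)
```

Unlike a Tempest run, a failure here points at the client code path, which is the coverage the tempest job was not providing.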
16:30:46 patrickeast: tempest uses its own client to test any APIs
16:31:38 patrickeast: IMO, tempest should verify that the API works as documented and that cross-project integration isn't broken
16:31:50 patrickeast: I think they don't want any additional code that could be a source of bugs
16:31:52 patrickeast: functional tests should be implemented inside of each project
16:32:46 e0ne, +1 about this (functional tests should be implemented inside of each project)
16:32:57 ok, so the line that was drawn is that tempest shouldn't use clients, just the direct APIs
16:32:58 got it
16:33:09 smcginnis: so, did we agree to drop tempest for cinderclient once the related tests are implemented as cinderclient functional tests?
16:34:08 or is anybody against the proposed solution?
16:34:13 e0ne: I think so. If it's not actually exercising the cinderclient code, then it really isn't helping much.
16:34:29 smcginnis: +1
16:35:08 e0ne: I guess we'll see when the patch gets submitted to change it if there are any other strong opinions.
16:35:23 smcginnis: got it!
16:35:34 e0ne: OK, good for now?
16:35:36 that's all from my side on this topic
16:35:40 e0ne: Thanks!
16:35:45 #topic API races patches
16:35:49 geguileo: You're up.
16:35:52 thanks everybody for the feedback
16:35:57 Thanks
16:36:07 Just wanted to bring attention to the API races patches
16:36:16 We really want them merged soon
16:36:29 So they are thoroughly tested
16:36:41 Some are simple, but others are a little more complex
16:37:01 we have to make them a high priority for reviews
16:37:09 geguileo: Is the order of the patch links in the agenda relevant?
16:37:19 to be sure that they land in M-2
16:37:21 IMO
16:37:32 smcginnis: For my patches it is
16:37:37 Because they are in a chain
16:37:44 geguileo: OK, thanks.
16:37:54 But bluex has created a new one and that can be reviewed on its own
16:38:17 geguileo: I did notice you've done a great job keeping the blueprint whiteboard organized.
16:38:33 I did because it was a mess of patches otherwise XD
16:38:41 geguileo: Yeah, definitely. ;)
16:38:46 geguileo: How many more patches will there be?
16:38:48 geguileo: your patches require a new sqlalchemy. do you know when it will be released and added to global-requirements?
16:39:06 #link https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support
16:39:06 e0ne: I'm not sure when that will be released
16:39:12 geguileo: Maybe we can prioritize the APIs needing the patches?
16:39:13 e0ne: I'll ask and add it to the BP
16:39:29 geguileo: what if it's released after Mitaka?
16:39:32 I have split out in the BP the patches that require the new version
16:39:36 And those that don't
16:39:43 geguileo: how much does it affect us?
16:39:55 e0ne: Then only half the patches will merge
16:40:15 geguileo: How critical are those?
16:40:45 e0ne: Extend, volume_upload_image, migrate, retype, backups
16:41:19 geguileo: could you please somehow mark in the commit message the patches that require the new sqlalchemy?
16:41:31 e0ne: They are ordered in the BP
16:41:37 geguileo: thanks
16:41:42 e0ne: Under: *Ready for review but need a new SQLAlchemy release (version 1.0.10):*
16:41:47 one more question
16:41:49 Is zzzeek around to answer about the SQLAlchemy release?
16:42:01 do we need a new oslo.db release for it?
16:42:14 scottda: I'll ping him on #sqlalchemy-devel
16:43:00 e0ne: Not that I know of
16:43:08 e0ne: We just need to update our requirements
16:43:22 geguileo: it's good. fewer dependencies is better
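For context, the API races work closes check-then-act gaps by pushing the state check into the UPDATE itself. A generic SQLAlchemy sketch of that compare-and-swap pattern, using a toy volumes table rather than Cinder's real models or its actual conditional-update helpers:

```python
import sqlalchemy as sa

engine = sa.create_engine('sqlite://')
metadata = sa.MetaData()
volumes = sa.Table(
    'volumes', metadata,
    sa.Column('id', sa.String(36), primary_key=True),
    sa.Column('status', sa.String(255)))
metadata.create_all(engine)


def begin_delete(conn, volume_id):
    """Atomically move a volume from 'available' to 'deleting'.

    The WHERE clause carries the expected current state, so two API
    workers racing on the same volume cannot both win: the loser's
    UPDATE matches zero rows.
    """
    result = conn.execute(
        volumes.update()
        .where(sa.and_(volumes.c.id == volume_id,
                       volumes.c.status == 'available'))
        .values(status='deleting'))
    if result.rowcount != 1:
        raise RuntimeError('volume %s is not in a deletable state'
                           % volume_id)


with engine.begin() as conn:
    conn.execute(volumes.insert().values(id='vol-1', status='available'))
    begin_delete(conn, 'vol-1')
```

The single-statement form is what makes the operation safe without explicit locking, which is presumably why some of the patches need newer SQLAlchemy features.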
16:43:59 That's all I wanted to say
16:44:09 geguileo: OK, thanks!
16:44:11 geguileo: thanks for your work on this!
16:44:29 #topic Open discussion
16:44:47 So, a call out to all reviewers - if we could get some focus on those patches...
16:44:59 I'd also love to see some focus on the new drivers.
16:45:13 I'd like to avoid the end-of-milestone crunch if at all possible.
16:45:23 There are a few out there that have been waiting for feedback.
16:45:24 smcginnis: do you have a list of patches with new drivers?
16:45:26 smcginnis: so, about the Hitachi CIs
16:45:34 before and after gerrit goes down for maintenance
16:45:37 If anyone has time to take a look, any reviews help.
16:45:41 So if it's open - do we care to support the Keystone V2 API? We've got a regression in Liberty - quota calls work only with Keystone V3.
16:45:51 kmartin: good point :)
16:45:59 e0ne: Not yet, but maybe I'll add a bit to the spec etherpad just to capture them somewhere easy to find.
16:46:26 erlon: They've been contacted and are supposedly working on it. We'll see.
16:46:33 smcginnis: Do we have a list of the drivers that are waiting for review somewhere?
16:46:34 smcginnis: we are going through a series of infra upgrades, so these last weeks we have needed to stop our CIs for some time
16:46:52 Oops, yeah, what e0ne asked.
16:46:53 jungleboyj: See the response to e0ne. :)
16:47:06 dulek: IMO, we can deprecate keystone API v2 in M or early in N
16:47:19 smcginnis: Done. :-)
16:47:22 * jungleboyj is slow today.
16:47:24 AFAIK, cinder works well with keystone API v3 only
16:47:27 erlon: No worries. I think we all have some down time. As long as it doesn't stretch on too long.
16:47:29 smcginnis: we will still have those issues at least until early January, when we finish all the infra upgrades
16:47:30 jungleboyj: ;)
16:47:30 NetApp FlashRay driver patch is https://review.openstack.org/253695
16:47:40 jungleboyj: did you get your morning coffee?
16:48:08 e0ne: Do you know if they are trying to push folks to v3? That would be my assumption.
16:48:31 e0ne: If we deprecate it we should still support it for some releases.
16:48:35 e0ne: A whole pot plus another big cup. Hasn't fixed my issues yet.
16:48:37 e0ne: Or is v3 not deployed commonly enough that we should do v2 until it is?
16:48:43 smcginnis: v2 is deprecated, the keystone team wants to use v3 only
16:48:48 e0ne: Keystone v2 won't be gone until the M+4 release.
16:49:06 dulek: TBH, it will be supported forever :(
16:49:10 e0ne: Then we should probably support that and go with v3. Unless that's an issue for operators, I suppose.
16:49:12 armax, ajo: guys, do you mind looking at https://review.openstack.org/#/c/254224/ (the rbac-qos-spec)?
16:49:18 dulek: openstack can't drop old APIs
16:49:25 it was discussed at the summit
16:49:55 smcginnis: we can start with mails to the ops and dev MLs
16:50:01 e0ne: Huh, I'd need to check again, but some guy from Neutron told me that V2 is deprecated in Mitaka and will be gone in R.
16:50:19 dulek: I tried to delete v1 from Cinder
16:50:25 dulek: That may be what they would like to happen, but probably not. :)
16:50:27 e0ne: Yeah, I know. ;)
16:51:14 #link https://etherpad.openstack.org/p/mitaka-deprecation-policy
16:51:26 So are we saying that Cinder supports V3 only, or should we fix api.contrib.quotas to be compatible with V2?
16:51:36 How to drop an API [version or feature]?
16:51:36 Don't. Deprecation is separate from entirely dropping support for APIs, though. (Deprecated, but never removed)
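The client-side difference between the two identity APIs raised here is mostly the auth plugin and the tenant-vs-project naming; a keystoneauth1 sketch, with placeholder endpoint and credentials:

```python
from keystoneauth1 import session
from keystoneauth1.identity import v2, v3

# Keystone v2: tenants, no domains.
auth_v2 = v2.Password(auth_url='http://keystone:5000/v2.0',
                      username='demo',
                      password='secret',
                      tenant_name='demo')

# Keystone v3: projects scoped by domain.
auth_v3 = v3.Password(auth_url='http://keystone:5000/v3',
                      username='demo',
                      password='secret',
                      project_name='demo',
                      user_domain_id='default',
                      project_domain_id='default')

sess = session.Session(auth=auth_v3)
# A v3 token exposes a project_id; v2-era code paths expect tenant_id,
# which is the kind of mismatch that can bite api.contrib.quotas.
print(sess.get_project_id())
```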
16:51:43 e0ne: I see…
16:52:10 dulek: How different are v2 and v3? Is it much of an effort to support both?
16:52:17 we can say that we support features A, B, C only with keystone API v3
16:52:22 heat did the same
16:52:32 project_id vs tenant_id, and nested quotas.
16:52:36 smcginnis: they are very different
16:52:45 Those are the differences, as far as I know.
16:52:48 Darn
16:53:16 Hm, but hey, will anybody deploy Keystone with only V2 if V2 is deprecated?
16:53:32 dulek: They shouldn't, but they probably will. ;)
16:53:45 smcginnis: :>
16:53:50 dulek: Actually, I think the concern would be existing deployments, not new ones.
16:53:58 dulek: a lot of vendors and operators use deprecated APIs :(
16:54:05 It would be great to have some ops input.
16:54:17 yup, we are still running keystone v2, only.
16:54:19 with Juno
16:54:20 smcginnis: +1. I mentioned it earlier
16:54:33 e0ne: Yep :)
16:54:45 Okay, so I'll get someone on my team to look at it. If it's easy enough we'll propose a patch and maybe a backport to Liberty.
16:54:55 winston-d_: Thanks. So it would be an issue if we only supported v3.
16:54:56 * dulek wrote bugport at first…
16:55:05 :)
16:55:18 dulek: Sounds good.
16:55:28 Oh, I forgot an announcement.
16:55:43 I recently found this cross-project spec was approved:
16:55:45 #link http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html
16:55:51 smcginnis: we can support only V3 for nested quotas
16:55:59 smcginnis: great news!
16:56:09 If anyone wants to look at that for cinder, have at it.
16:56:26 smcginnis: it will be harder to test RPC versioned objects, but I like this idea
16:56:39 smcginnis: will do it
16:56:46 e0ne: Will it? I don't see the problem.
16:56:50 e0ne: Awesome!
16:56:58 e0ne: I'm aware of the nested quotas incompatibility. We'll look at it
16:57:01 smcginnis: I proposed it some time ago, so I have to finish it
16:57:24 dulek: DuncanT had a concern about dropping downgrade migrations
16:57:41 Alright, let's continue any discussions in #openstack-cinder
16:57:43 Thanks everyone.
16:57:57 #endmeeting
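On the no-downward-sql-migration spec announced near the end: under it, a new sqlalchemy-migrate script simply omits downgrade(). A sketch, with a made-up version number, table change, and column name:

```python
# Hypothetical cinder/db/sqlalchemy/migrate_repo/versions/0XX_add_example_info.py
from sqlalchemy import Column, MetaData, String, Table


def upgrade(migrate_engine):
    """Add an illustrative column to the volumes table.

    No downgrade() is defined: per the approved cross-project spec,
    downward migrations are no longer written, so attempts to downgrade
    past this version simply fail.
    """
    meta = MetaData()
    meta.bind = migrate_engine
    volumes = Table('volumes', meta, autoload=True)
    # create_column comes from sqlalchemy-migrate's changeset extension,
    # which is active when the migration framework runs this script.
    volumes.create_column(Column('example_info', String(255)))
```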