16:00:08 <smcginnis> #startmeeting Cinder
16:00:13 <openstack> Meeting started Wed Dec 16 16:00:08 2015 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:14 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:17 <openstack> The meeting name has been set to 'cinder'
16:00:19 <e0ne> hi
16:00:21 <smcginnis> Agenda: https://wiki.openstack.org/wiki/CinderMeetings#Next_meeting
16:00:24 <xyang2> hi
16:00:26 <rhedlind> hi
16:00:34 <smcginnis> Courtesy ping: dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang tbarron scottda erlon rhedlind
16:00:39 <scottda> hi
16:00:42 <tbarron> hi
16:00:44 <jseiler> hi
16:00:44 <geguileo> Hi!
16:00:47 <eharney> hi
16:00:48 <geguileo> smcginnis: Thanks  :-)
16:00:49 <mtanino> hi.
16:00:49 <smcginnis> Hey everyone.
16:00:54 <mriedem> o/
16:00:56 <erlon> smcginnis: thanks!
16:01:06 <erlon> hi
16:01:23 <smcginnis> #topic Announcements
16:01:32 <e0ne> "Add your IRC nick to this list to be pinged at the start of the meeting" I like this feature:)
16:01:49 <smcginnis> General info - voting has started for N and O naming. Yay!
16:02:06 <thangp> o/
16:02:06 <smcginnis> You probably should have received an email from Monty if you've been contributing.
16:02:14 <e0ne> Null or Nameless - not bad!
16:02:25 <dulek> Nameless or Null. There are no other choices. ;)
16:02:29 <smcginnis> e0ne: Nameless would be kind of funny. :)
16:02:40 <erlon> haha
16:02:42 <erlon> totally
16:02:45 <smcginnis> Too bad there's not a None Texas.
16:02:47 <scottda> I think Null will cause all kinds of errors
16:02:54 <e0ne> yes:) I voted for Fortune in the past
16:03:04 <jungleboyj> Hello.
16:03:07 <patrickeast> hey
16:03:09 <diablo_rojo> Hey :)
16:03:16 <smcginnis> #topic Release Status
16:03:20 <baumann> Hello!
16:03:25 <smcginnis> #link https://etherpad.openstack.org/p/mitaka-cinder-spec-review-tracking Spec tracking
16:03:39 <smcginnis> I've updated a few of the specs on there to include links to the patches actually implementing them.
16:03:58 <smcginnis> If you're driving any of those specs, feel free to add any links to pieces you think need attention.
16:04:07 <smcginnis> I'll try to use that as a focus for reviews.
16:04:50 <smcginnis> #link http://ci-watch.tintri.com/project?project=cinder&time=7+days
16:04:59 <smcginnis> CI results are still a little mixed.
16:05:07 <smcginnis> I've contacted the largest offenders.
16:05:21 <smcginnis> I think those of you having CI issues are aware of it.
16:05:33 <smcginnis> Please make sure that is getting the attention it needs internally.
16:05:48 <e0ne> smcginnis: are we going to drop drivers if their CI is unstable?
16:05:55 <smcginnis> I will likely post removal patches soon if a few of them don't stabilize.
16:05:56 <e0ne> I mean, to do it in Mitaka
16:06:10 <smcginnis> e0ne: Yeah, I think we need to enforce our policies.
16:06:53 <e0ne> smcginnis: sounds reasonable. probably you should mail the maintainers soon to notify everybody
16:07:25 <smcginnis> #info Bug stats: Cinder- 473, cinderclient- 37, os-brick- 12
16:07:28 <erlon> smcginnis: yes, you need to be very bold on the policies or we're going to have a lot of complaints about that
16:07:35 <smcginnis> Stats are looking a little better.
16:07:39 <smcginnis> erlon: Agreed
16:07:42 <dulek> Third-party-announce list deserves a friendly reminder too.
16:07:53 <e0ne> dulek: +1
16:08:11 <smcginnis> dulek: Good call, I'll post something on there before posting any patches.
16:08:31 <smcginnis> #link https://bugs.launchpad.net/nova/+bugs?field.status:list=NEW&field.tag=volumes
16:08:52 <smcginnis> Anyone who can spend some time there ^^ - help triaging and providing input is always appreciated.
16:09:12 <smcginnis> OK, that's all I've really got...
16:09:21 <smcginnis> #topic Storing dicts on driver 'provider_location'
16:09:30 <smcginnis> erlon, dulek: Hey
16:09:41 <erlon> So, this topic came up after thingee noticed that some drivers (including Hitachi) could be using metadata to store LUN id information, which could be potentially dangerous and cause a security issue.
16:10:10 <erlon> Reviewing the Hitachi driver I saw that it needs to store more than one field about the volume. I tried to move the metadata to provider_location, but that field cannot be stored as a dictionary, so the question came up: why couldn't it be stored in a dict?
16:10:13 <e0ne> we're going to store provider_location in our DB
16:10:16 <xyang2> There is also a provider_id field in addition to provider_location
16:10:27 <e0ne> at least, I'm going to publish a spec for it
16:10:37 <xyang2> provider_id is ideal for storing lun id kind of info
16:10:59 <erlon> xyang2: mhm
16:11:13 <erlon> e0ne: will it then be possible to store more than just a string?
16:11:48 <e0ne> erlon: yes, json (dict) sounds like a good solution
16:12:06 <erlon> e0ne: great!
16:12:13 <smcginnis> Is there a strong reason not to just use json.dumps/loads for those that need more than what's already there?
16:12:26 <flip214> isn't the length typically limited to VARCHAR(255)? not much space there.
16:12:36 <dulek> flip214: +1
16:12:51 <erlon> flip214: +1
16:13:11 <dulek> We can extend that (did anyone know that expanding a VARCHAR is an online operation in MySQL? :D).
16:13:11 <e0ne> erlon, flip214: do you have an example of data which can't fit in 255 chars of JSON?
16:13:32 <smcginnis> It ultimately needs to be a string in the db, right? I'd be hesitant to allow more than 255. That already seems like a lot of space.
16:13:55 <e0ne> dulek: it's not a big deal to extend. do we know how big it should be?
16:14:20 <erlon> e0ne: actually, I don't recall any such situation right now
16:14:34 <erlon> but if it's not a big deal to extend
16:14:42 <e0ne> erlon: it would be helpful to get some input for it
16:14:43 <dulek> e0ne: I get the point.
16:14:57 <xyang2> provider_id is 255 and provider_location is another 255, still not enough?
16:14:59 <xyang2> https://github.com/openstack/cinder/blob/master/cinder/db/sqlalchemy/models.py#L160
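(A minimal sketch of the json.dumps/loads approach smcginnis suggests above: a driver packing a small dict into the existing 255-character provider_location column. The field names are hypothetical, not taken from any real driver.)

```python
import json

# Sketch only: serialize a couple of backend identifiers into the
# provider_location string column (VARCHAR(255)). 'ldev' and 'ports'
# are made-up example keys, not real driver fields.
def build_provider_location(ldev_id, target_ports):
    location = json.dumps({'ldev': ldev_id, 'ports': target_ports})
    # Guard against overflowing the 255-char column.
    if len(location) > 255:
        raise ValueError('provider_location payload too large: %d chars'
                         % len(location))
    return location

def parse_provider_location(volume):
    # Recover the dict from the stored string.
    return json.loads(volume['provider_location'])
```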
16:15:12 <smcginnis> I'd like to see a compelling reason to make it bigger.
16:15:27 <e0ne> smcginnis: +2
16:15:42 <flip214> e0ne: the DRBD resource configuration file can easily be 2kB of text if there are quite a few nodes and options set.
16:15:48 <e0ne> smcginnis: that is my point: we need use-cases for it
16:16:09 <flip214> not that this would need to be stored there right now; JFI.
16:16:14 <smcginnis> flip214: Do you really need to store an entire config file for every volume in the database?
16:16:22 <smcginnis> flip214: OK, whew. :)
16:16:39 <flip214> smcginnis: no.
16:16:42 <patrickeast> kind of seems like if we really need more data, something more like a key-value store might make more sense than forcing drivers to stash JSON data in a single field?
16:16:54 <flip214> just saying that there are things that are bigger than 255 bytes ;)
16:17:15 <smcginnis> But I still question whether more than that could/should be stored in the database.
16:17:44 <flip214> patrickeast: TBH, I don't like the idea of a table with ID KEY VALUE and each being limited to 255 or so (again!)....
16:18:22 <dulek> Doesn't some driver store internal info in admin_metadata?
16:18:33 <jungleboyj> Putting large dumps of data into the DB is likely to slow Cinder down.  That is concerning.
16:18:43 <smcginnis> erlon, dulek: Any other discussion/input needed, or should we wait for a spec?
16:18:51 <e0ne> jungleboyj: I totally agree with you
16:18:58 <smcginnis> jungleboyj: +1
16:19:53 <erlon> smcginnis: I'll look for use cases in the cinder drivers to see how many other drivers are using provider_location, and how
16:19:54 <geguileo> jungleboyj: The data is already stored there now as metadata, right?
16:20:16 <erlon> smcginnis: then we can know if there are cases that justify the implementation
16:20:55 <dulek> smcginnis: Well, I don't have a driver, so I'm hardly the person who should work on it.
16:20:58 <smcginnis> erlon: That makes sense to me. I'd like to see more data before making any major changes.
16:21:06 <erlon> smcginnis: for the Hitachi driver the question came up only because I thought it was weird to have to use json.dumps() to store that kind of info
16:21:08 <smcginnis> dulek: ;)
16:21:26 <jungleboyj> geguileo: I thought this was additional data they were looking to put out there.
16:21:27 <erlon> smcginnis: one more point
16:21:40 <erlon> Mike suggested filing a bug against the driver
16:21:55 <geguileo> jungleboyj: Maybe I misunderstood, I thought this was about moving that data
16:22:25 <erlon> smcginnis: in the end I found out that the metadata is duplicated in provider_location, and the driver only uses provider_location to retrieve the volume
16:22:30 <smcginnis> erlon: Wasn't it determined that it's only used on initial connection, so there wasn't a security concern in someone changing it after the fact?
16:22:37 <smcginnis> erlon: Oh!
16:22:39 <erlon> so it's not a security problem in the driver
16:23:12 <smcginnis> erlon: Then yeah, I'd say file a bug and have them clean that up. No point in storing it in both locations.
16:23:31 <erlon> smcginnis: ok
16:23:44 <smcginnis> I think they're one of the ones at risk of removal if their CI doesn't shape up, but that's another issue.
16:24:15 <erlon> smcginnis: ok, we can talk about that later
16:24:18 <smcginnis> erlon: OK, any other input needed for now?
16:24:30 <erlon> smcginnis: for now I'm fine
16:24:39 <smcginnis> Great. Thanks!
16:24:47 <smcginnis> #topic python-cinderclient: tempest vs functional tests
16:24:53 <smcginnis> e0ne: You're up.
16:24:56 <e0ne> thanks
16:25:29 <e0ne> the issue is: we, like many other python-*clients, use tempest to verify that a new version works with the rest of the services
16:25:38 <e0ne> but tempest doesn't use cinderclient
16:26:05 <e0ne> we run ~1500 tests, and only a few of them (nova attach features) are related to cinderclient
16:26:31 <e0ne> so I propose to add such tests to cinderclient functional tests and drop tempest job
16:26:49 <e0ne> it will make our CI 30 minutes faster!
16:27:11 <smcginnis> e0ne: I'm all for that.
16:27:14 <e0ne> we don't need to run full tempest for each commit to python-cinderclient
16:27:28 <smcginnis> e0ne: I'm an advocate for _effective_ testing.
16:27:38 <e0ne> btw, obutenko volunteered to help me with it
16:27:48 <smcginnis> obutenko: Thanks!
16:27:53 <e0ne> we can start it soon
16:27:55 <xyang2> e0ne: should we do that for cinder too, maybe later?
16:28:04 <e0ne> AFAIK, other projects will do it too
16:28:22 <e0ne> xyang2: good idea, I'm for it
16:28:30 <smcginnis> xyang2: +1
16:28:37 <e0ne> xyang2: let's start with cinderclient - it will be easier and faster
16:28:47 <xyang2> e0ne: sounds good
16:28:59 <smcginnis> We keep increasing Jenkins load, but I'm not convinced we're actually adding value. It would be good to get rid of parts that aren't necessary.
16:29:21 <smcginnis> Anyone else have thoughts or input?
16:29:34 <dulek> xyang2: You mean drop tempest from Cinder? This seems odd, functional tests aren't integration tests like Tempest.
16:29:49 <e0ne> smcginnis: I'm talking about decreasing the number of tests and the time they take
16:30:00 <smcginnis> e0ne: Yep!
16:30:09 <jungleboyj> I am all for reducing test time.
16:30:20 <e0ne> dulek: we mean to move functional tests into cinder and leave only integration tests in tempest
16:30:25 <patrickeast> so maybe a dumb question, but why isn't tempest using cinderclient for the volume tests?
16:30:43 <dulek> patrickeast: It calls the API directly. That's how Tempest works.
16:30:44 <patrickeast> should we maybe switch that so we *are* getting additional testing from it? then maybe just restrict which tempest tests we run?
16:30:46 <e0ne> patrickeast: tempest uses its own clients to test the APIs
16:31:38 <e0ne> patrickeast: IMO, tempest should verify that the API works as documented and that cross-project integration isn't broken
16:31:50 <erlon> patrickeast: I think they don't want any additional code that could be a source of bugs
16:31:52 <e0ne> patrickeast: functional tests should be implemented inside of each project
16:32:46 <obutenko> e0ne, +1 about this ( functional tests should be implemented inside of each project )
16:32:57 <patrickeast> ok, so the line that was drawn is that tempest shouldn't use the clients, just the APIs directly
16:32:58 <patrickeast> got it
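(A rough idea of what a cinderclient functional test along these lines could look like; the credentials handling and test layout here are assumptions for illustration, not the harness e0ne and obutenko actually built.)

```python
import os
import unittest

from cinderclient import client as cinder_client


class VolumeFunctionalTest(unittest.TestCase):
    """Hypothetical functional test driving python-cinderclient end to end."""

    def setUp(self):
        # Assumes the usual OS_* environment variables are provided by the job.
        self.client = cinder_client.Client(
            '2',
            os.environ['OS_USERNAME'],
            os.environ['OS_PASSWORD'],
            os.environ['OS_TENANT_NAME'],
            os.environ['OS_AUTH_URL'])

    def test_create_and_get_volume(self):
        # Exercises the real client code paths against a running cloud.
        vol = self.client.volumes.create(size=1)
        self.addCleanup(self.client.volumes.delete, vol)
        fetched = self.client.volumes.get(vol.id)
        self.assertEqual(1, fetched.size)
```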
16:33:09 <e0ne> smcginnis: so, did we agree to drop tempest for cinderclient once the related tests are implemented as cinderclient functional tests?
16:34:08 <e0ne> or is anybody against the proposed solution?
16:34:13 <smcginnis> e0ne: I think so. If it's not actually exercising the cinderclient code, then it really isn't helping much.
16:34:29 <e0ne> smcginnis: +1
16:35:08 <smcginnis> e0ne: I guess we'll see if there are any other strong opinions when the patch to change it gets submitted.
16:35:23 <e0ne> smcginnis: got it!
16:35:34 <smcginnis> e0ne: OK, good for now?
16:35:36 <e0ne> that's all from my side on this topic
16:35:40 <smcginnis> e0ne: Thanks!
16:35:45 <smcginnis> #topic API races patches
16:35:49 <smcginnis> geguileo: You're up.
16:35:52 <e0ne> thanks averybody for feedback
16:35:57 <geguileo> Thanks
16:36:03 <e0ne> s/averybody/everybody
16:36:07 <geguileo> Just wanted to bring attention to API races patches
16:36:16 <geguileo> We really want them merged soon
16:36:29 <geguileo> So they are thoroughly tested
16:36:41 <geguileo> Some are simple, but others are a little more complex
16:37:01 <e0ne> we have to make it high priority for reviews
16:37:09 <smcginnis> geguileo: Is the order of the patch links in the agenda relevant?
16:37:19 <e0ne> to be sure that they will land in M-2
16:37:21 <e0ne> IMO
16:37:32 <geguileo> smcginnis: For my patches it is
16:37:37 <geguileo> Because they are in a chain
16:37:44 <smcginnis> geguileo: OK, thanks.
16:37:54 <geguileo> But bluex has created a new one and that can be reviewed on its own
16:38:17 <smcginnis> geguileo: I did notice you've done a great job keeping the blueprint whiteboard organized.
16:38:33 <geguileo> I did because it was a mess of patches otherwise  XD
16:38:41 <smcginnis> geguileo: Yeah, definitely. ;)
16:38:46 <dulek> geguileo: How many more patches will there be?
16:38:48 <e0ne> geguileo: your patches require a new sqlalchemy. do you know when it will be released and added to global-requirements?
16:39:06 <smcginnis> #link https://blueprints.launchpad.net/cinder/+spec/cinder-volume-active-active-support
16:39:06 <geguileo> e0ne: I'm not sure when that will be released
16:39:12 <dulek> geguileo: Maybe we can prioritize APIs needing the patches?
16:39:13 <geguileo> e0ne: I'll ask and add it to the BP
16:39:29 <e0ne> geguileo: what if it's released after Mitaka?
16:39:32 <geguileo> In the BP I have split out the patches that require the new version
16:39:36 <geguileo> And those that don't
16:39:43 <e0ne> geguileo: how much does it affect us?
16:39:55 <geguileo> e0ne: Then only half the patches will merge
16:40:15 <smcginnis> geguileo: How critical are those?
16:40:45 <geguileo> e0ne: Extend, volume_upload_image, migrate, retype, backups
16:41:19 <e0ne> geguileo: could you please somehow mark in the commit message which patches require the new sqlalchemy?
16:41:31 <geguileo> e0ne: They are ordered in the BP
16:41:37 <e0ne> geguileo: thanks
16:41:42 <geguileo> e0ne: Under: *Ready for review but need a new SQLAlchemy release (version 1.0.10):*
16:41:47 <e0ne> one more question
16:41:49 <scottda> Is zzzeek around to answer about the SQLAlchemy release?
16:42:01 <e0ne> do we need new oslo.db release for it?
16:42:14 <geguileo> scottda: I'll ping him on #sqlalchemy-devel
16:43:00 <geguileo> e0ne: Not that I know
16:43:08 <geguileo> e0ne: We just need to update our requirements
16:43:22 <e0ne> geguileo: that's good. fewer dependencies is better
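(For context, the change geguileo describes would amount to bumping the SQLAlchemy line in cinder's requirements.txt once the new release lands in global-requirements — roughly the following; the exact bounds are a guess.)

```
SQLAlchemy>=1.0.10   # hypothetical new minimum once 1.0.10 is released
```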
16:43:59 <geguileo> That's all I wanted to say
16:44:09 <smcginnis> geguileo: OK, thanks!
16:44:11 <e0ne> geguileo: thanks for your work on this!
16:44:29 <smcginnis> #topic Open discussion
16:44:47 <smcginnis> So a call out to all reviewers - if we could get some focus on those patches...
16:44:59 <smcginnis> I'd also love to see some focus on the new drivers.
16:45:13 <smcginnis> I'd like to avoid the end of milestone crunch if at all possible.
16:45:23 <smcginnis> There are a few out there that have been waiting for feedback.
16:45:24 <e0ne> smcginnis: do you have a list of patches with new drivers?
16:45:26 <erlon> smcginnis: so, about Hitachi CIs
16:45:34 <kmartin> before and after gerrit goes down for maintenance
16:45:37 <smcginnis> If anyone has time to take a look, any reviews help.
16:45:41 <dulek> So if it's open - do we care to support the Keystone V2 API? We've got a regression in Liberty - quota calls only work with Keystone V3.
16:45:51 <e0ne> kmartin: good point :)
16:45:59 <smcginnis> e0ne: Not yet, but maybe I'll add a bit to the spec etherpad just to capture them somewhere easy to find.
16:46:26 <smcginnis> erlon: They've been contacted and are supposedly working on it. We'll see.
16:46:33 <jungleboyj> smcginnis: Do we have a list of the drivers that are waiting for review somewhere?
16:46:34 <erlon> smcginnis: we are going through a series of infra upgrades, so these last weeks we needed to stop our CIs for some time
16:46:52 <jungleboyj> Oops, yeah, what e0ne asked.
16:46:53 <smcginnis> jungleboyj: See response to e0ne. :)
16:47:06 <e0ne> dulek: IMO, we can deprecate keystone api v2 in M or early in N
16:47:19 <jungleboyj> smcginnis: Done.  :-)
16:47:22 * jungleboyj is slow today.
16:47:24 <e0ne> AFAIK, cinder works well only with keystone api v3
16:47:27 <smcginnis> erlon: No worries. I think we all have some down time. As long as it doesn't stretch on too long.
16:47:29 <erlon> smcginnis: we will still have those issues at least until early January, when we finish all the infra upgrades
16:47:30 <smcginnis> jungleboyj: ;)
16:47:30 <timcl> NetApp FlashRay driver patch is https://review.openstack.org/253695
16:47:40 <e0ne> jungleboyj: did you get your morning coffee?
16:48:08 <smcginnis> e0ne: Do you know if they are trying to push folks to v3? That would be my assumption.
16:48:31 <dulek> e0ne: If we deprecate it we should still support it for some releases.
16:48:35 <jungleboyj> e0ne: A whole pot plus another big cup.  Hasn't fixed my issues yet.
16:48:37 <smcginnis> e0ne: Or is v3 not deployed commonly enough, so we should stick with v2 until it is?
16:48:43 <e0ne> smcginnis: v2 is deprecated; the keystone team wants people to use v3 only
16:48:48 <dulek> e0ne: Keystone v2 won't be gone until M+4 release.
16:49:06 <e0ne> dulek: TBH, it will be supported forever:(
16:49:10 <smcginnis> e0ne: Then we should probably support that and go with v3. Unless that's an issue for operators I suppose.
16:49:12 <hdaniel> armax, ajo: guys, do you mind looking at  https://review.openstack.org/#/c/254224/  (the rbac-qos-spec) ?
16:49:18 <e0ne> dulek: openstack can't drop old APIs
16:49:25 <e0ne> it was discussed at summit
16:49:55 <e0ne> smcginnis: we can start with mails to ops and dev MLs
16:50:01 <dulek> e0ne: Huh, would need to check again, but some guy from Neutron told me that V2 is deprecated in Mitaka and will be gone in R.
16:50:19 <e0ne> dulek: I tried to delete v1 from Cinder
16:50:25 <smcginnis> dulek: That may be what they would like to happen, but probably not. :)
16:50:27 <dulek> e0ne: Yeah, I know. ;)
16:51:14 <e0ne> #link https://etherpad.openstack.org/p/mitaka-deprecation-policy
16:51:26 <dulek> So are we saying that Cinder supports V3 only, or should we fix api.contrib.quotas to be compatible with V2?
16:51:36 <e0ne> How to drop an API [version or feature]?
16:51:36 <e0ne> Don't. Deprecation is separate from entirely dropping support for APIs, though. (Deprecated, but never removed)
16:51:43 <dulek> e0ne: I see…
16:52:10 <smcginnis> dulek: How different are v2 and v3? Is it much of an effort to support both?
16:52:17 <e0ne> we can say that we support features A, B, C only with keystone API v3
16:52:22 <e0ne> heat did the same
16:52:32 <dulek> project_id vs tenant_id, and nested quotas.
16:52:36 <e0ne> smcginnis: they are very different
16:52:45 <dulek> That's the differences from what I know.
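(To make the v2/v3 difference dulek mentions concrete, here is roughly how the same authentication looks against each API — tenant_* vs project_* plus domains. The keystoneauth1 plugin usage is an assumption for illustration, not code from the quota fix under discussion.)

```python
from keystoneauth1 import session
from keystoneauth1.identity import v2, v3

# Keystone v2: scoped by tenant only.
auth_v2 = v2.Password(
    auth_url='http://keystone:5000/v2.0',
    username='admin', password='secret',
    tenant_name='demo')

# Keystone v3: scoped by project within a domain, which is what the
# nested-quota work relies on.
auth_v3 = v3.Password(
    auth_url='http://keystone:5000/v3',
    username='admin', password='secret',
    project_name='demo',
    user_domain_id='default',
    project_domain_id='default')

sess = session.Session(auth=auth_v3)  # or auth=auth_v2 for the old API
```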
16:52:48 <smcginnis> Darn
16:53:16 <dulek> Hm, but hey, will anybody deploy Keystone with only V2 if V2 is deprecated?
16:53:32 <smcginnis> dulek: They shouldn't, but they probably will. ;)
16:53:45 <dulek> smcginnis: :>
16:53:50 <smcginnis> dulek: Actually, I think the concern would be existing deployments, not new ones.
16:53:58 <e0ne> dulek: a lot of vendors and operators use deprecated APIs :(
16:54:05 <smcginnis> It would be great to have some ops input.
16:54:17 <winston-d_> yup, we are still running keystone v2, only.
16:54:19 <winston-d_> with Juno
16:54:20 <e0ne> smcginnis: +1. I mentioned it earlier
16:54:33 <smcginnis> e0ne: Yep :)
16:54:45 <dulek> Okay, so I'll get someone on my team to look at it. If it's easy enough we'll propose a patch and maybe a backport to Liberty.
16:54:55 <smcginnis> winston-d_: Thanks. So it would be an issue if we only supported v3.
16:54:56 * dulek wrote bugport at first…
16:55:05 <smcginnis> :)
16:55:18 <smcginnis> dulek: Sounds good.
16:55:28 <smcginnis> Oh, I forgot an announcement.
16:55:43 <smcginnis> I recently found that this cross-project spec was approved:
16:55:45 <smcginnis> #link http://specs.openstack.org/openstack/openstack-specs/specs/no-downward-sql-migration.html
16:55:51 <e0ne> smcginnis: we can support only V3 for nested quotas
16:55:59 <e0ne> smcginnis: great news!
16:56:09 <smcginnis> If anyone wants to look at that for cinder, have at it.
16:56:26 <e0ne> smcginnis: it will be harder to test RPC versioned objects, but I like this idea
16:56:39 <e0ne> smcginnis: will do it
16:56:46 <dulek> e0ne: Will it? I don't see the problem.
16:56:50 <smcginnis> e0ne: Awesome!
16:56:58 <dulek> e0ne: I'm aware of nested quotas incompatibility. We'll look at it
16:57:01 <e0ne> smcginnis: I proposed it some time ago, so I have to finish it
16:57:24 <e0ne> dulek: DuncanT had a concern about dropping downgrade migrations
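(As an illustration of what the no-downward-migration spec means for cinder's sqlalchemy-migrate scripts: new migrations would define upgrade() only, with no downgrade() counterpart. The column below is invented for the example.)

```python
from sqlalchemy import Column, MetaData, String, Table
# The migration runner normally loads sqlalchemy-migrate's changeset
# extensions (which add Table.create_column); imported here to be explicit.
from migrate import changeset  # noqa


def upgrade(migrate_engine):
    meta = MetaData()
    meta.bind = migrate_engine

    volumes = Table('volumes', meta, autoload=True)
    # 'example_field' is a made-up column purely for illustration.
    volumes.create_column(Column('example_field', String(255), nullable=True))

# Note: no downgrade() is defined, per the cross-project spec.
```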
16:57:41 <smcginnis> Alright, let's continue any discussions in #openstack-cinder
16:57:43 <smcginnis> Thanks everyone.
16:57:57 <smcginnis> #endmeeting