16:00:30 <jungleboyj> #startmeeting Cinder
16:00:30 <openstack> Meeting started Wed Jan 23 16:00:30 2019 UTC and is due to finish in 60 minutes. The chair is jungleboyj. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:34 <openstack> The meeting name has been set to 'cinder'
16:00:41 <jungleboyj> Courtesy ping: jungleboyj diablo_rojo, diablo_rojo_phon, rajinir tbarron xyang xyang1 e0ne gouthamr thingee erlon tpsilva ganso patrickeast tommylikehu eharney geguileo smcginnis lhx_ lhx__ aspiers jgriffith moshele hwalsh felipemonteiro lpetrut lseki _alastor_ whoami-rajat yikun rosmaita enriquetaso
16:00:45 <yikun> hello
16:00:47 <whoami-rajat> Hi
16:00:48 <e0ne> hi
16:00:51 <rosmaita> o/
16:00:58 <geguileo> hi! o/
16:00:58 <rajinir> hi
16:00:59 <eharney> hi
16:01:00 <woojay> hello
16:01:00 <jungleboyj> @!
16:01:00 <_pewp_> jungleboyj ( ・_・)ノ
16:01:06 <xyang> hi
16:01:14 <enriquetaso> o/
16:02:25 <jungleboyj> Pretty good showing. Do we have smcginnis? He has a couple of topics.
16:02:41 <walshh_> hi
16:02:48 <jungleboyj> walshh_: Welcome.
16:03:31 <jungleboyj> Hmmm. Ok. Guess we will get started.
16:03:48 <smcginnis> o/
16:03:51 <jungleboyj> #topic announcements
16:03:55 <jungleboyj> smcginnis: Yay! Welcome.
16:04:04 <jungleboyj> So, announcements ...
16:04:11 <davidsha> o/
16:04:32 <jungleboyj> We did not get any dissent to the proposal of adding yikun and whoami-rajat as cores, so they have now been added to the core list.
16:04:48 <rosmaita> congratulations!
16:04:50 <jungleboyj> Sorry that took a little longer to get done, but you should now see a +2 option for your reviews.
16:04:59 <smcginnis> Welcome!
16:05:07 <e0ne> whoami-rajat, yikun: welcome!
16:05:21 <jungleboyj> Yes, welcome. Thank you for your commitment as of late. It is great to have you on board.
16:05:23 <yikun> ha, thanks. :)
16:05:25 <whoami-rajat> jungleboyj: yes, Thanks!
16:05:32 <whoami-rajat> Thanks everyone.
16:06:01 <jungleboyj> So, it is great to grow the team.
16:06:35 <jungleboyj> Also, friendly reminder that our mid-cycle planning continues:
16:06:37 <e0ne> jungleboyj: +1
16:06:39 <jungleboyj> #link https://etherpad.openstack.org/p/cinder-stein-mid-cycle-planning
16:06:50 <erlon_> hey
16:07:09 <jungleboyj> Details on our hotels have been added in the etherpad.
16:07:48 <jungleboyj> Also, there are a number of meet-up sessions that happen to be taking place while we are in town, so those will be good for us to attend.
16:08:13 <jungleboyj> I think that is all I had for announcements.
16:08:56 <jungleboyj> #topic Default value of backend_url vs tested value
16:08:59 <jungleboyj> smcginnis:
16:09:24 <smcginnis> I just wanted to make sure folks were aware of this and see if anyone had any thoughts on whether we should change anything.
16:09:45 <smcginnis> Right now, in code, our lock coordination through tooz uses a local file.
16:10:03 <smcginnis> That is also what you end up with if you do a distro package based install of Cinder.
16:10:08 <smcginnis> Not sure about other deployments.
16:10:09 <e0ne> smcginnis: is it backend_url for locks?
16:10:15 <smcginnis> e0ne: Correct
16:10:40 <smcginnis> The issue is, devstack sets the backend url to use etcd.
16:10:58 <smcginnis> At least by default. It is possible to override that, but I don't see anywhere where we do.
16:11:14 <smcginnis> So all gate testing is using etcd.
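For context on the setting under discussion: Cinder hands a backend_url to tooz, which picks the coordination/locking driver from the URL scheme. The sketch below uses the public tooz API to contrast the file-based default with an etcd3-backed coordinator. The paths, endpoint, and lock name are illustrative assumptions, not values taken from devstack or the Cinder defaults mentioned in the meeting.

```python
# Minimal sketch of the two tooz coordination backends under discussion.
# URLs and names below are assumptions chosen only for illustration.
import uuid

from tooz import coordination

member_id = uuid.uuid4().bytes  # unique id for this process

# File-based locking (what a plain distro install of Cinder ends up with):
file_coord = coordination.get_coordinator(
    'file:///var/lib/cinder/coordination',  # assumed lock directory
    member_id,
)
file_coord.start()
with file_coord.get_lock(b'example-volume-lock'):
    pass  # critical section protected by a lock file on local disk
file_coord.stop()

# etcd3-backed locking (roughly what devstack wires up for the gate):
etcd_coord = coordination.get_coordinator(
    'etcd3+http://127.0.0.1:2379',  # assumed endpoint
    member_id,
)
etcd_coord.start(start_heart=True)  # heartbeat keeps the lease alive
with etcd_coord.get_lock(b'example-volume-lock'):
    pass  # critical section coordinated through etcd, shared across nodes
etcd_coord.stop()
```

The practical difference being debated above is exactly this: the file driver only coordinates processes on one host, while the etcd driver coordinates across nodes but needs an etcd service deployed.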
16:11:23 <eharney> just using files by default seems right to me -- but i'm not sure why we've moved to only testing etcd in the gate
16:11:24 <smcginnis> So our default settings are not being tested, but that's it.
16:11:53 <smcginnis> eharney: My guess is when we declared etcd as an expected service, they wanted to get coverage on that.
16:12:27 <smcginnis> So we can change the devstack default, but it seems like we would want to have both local file and etcd tested. I'm just not sure where to divide that up.
16:12:45 <smcginnis> Or if it's really worth changing a bunch here.
16:12:56 <smcginnis> So just wanted to point that out in case anyone else has any thoughts or ideas.
16:13:04 <smcginnis> Since not tested equals broken.
16:13:09 <jungleboyj> :-)
16:13:11 <eharney> we could consider switching the lio-barbican job to not use etcd, since it's been serving as a place to test "the other option" for a few things already
16:13:18 <eharney> assuming it's easy to turn off
16:13:31 <smcginnis> Yeah, just a flag.
16:13:35 <jungleboyj> eharney: That is a good idea.
16:13:37 <smcginnis> That might be best.
16:13:53 <e0ne> eharney: great idea. I like this option
16:13:56 <smcginnis> It didn't seem like the "correct" place to do that, but it is somewhere that we could easily switch it.
16:14:29 <e0ne> smcginnis: the other option is to add one more job with tempest and local locks
16:14:34 <jungleboyj> Maybe that should be the 'other options' job?
16:14:55 <jungleboyj> e0ne: With the current infra state, do we want to add more jobs?
16:14:56 <smcginnis> e0ne: Yeah, that's another possibility. Seemed like overkill though. We already have so many jobs.
16:15:03 <jungleboyj> smcginnis: ++
16:15:11 <eharney> it also seems like overkill to deploy etcd in a bunch of jobs that don't really need it...
16:15:34 <smcginnis> It's a "base OpenStack service" :)
16:15:34 <e0ne> jungleboyj: I don't want more jobs either. I just pointed out another option
16:15:59 <jungleboyj> e0ne: Understood. :-) No worries
16:16:04 <e0ne> :)
16:16:15 <smcginnis> Well, if folks are OK with the barbican_lio idea, I can try to put up a patch later to set this flag.
16:16:27 <smcginnis> Then at least we have *something* covering that config scenario.
16:16:33 <eharney> https://review.openstack.org/#/c/632773/ will see what happens
16:16:40 <jungleboyj> smcginnis: ++
16:17:04 <jungleboyj> eharney: What took you so long?
16:17:09 <smcginnis> eharney: Awesome!
16:17:25 <smcginnis> jungleboyj: OK, that's enough for now unless anyone else has questions.
16:17:36 <jungleboyj> #link https://review.openstack.org/#/c/632773/
16:17:52 <jungleboyj> Ok. Thanks for catching that, smcginnis.
16:18:12 <jungleboyj> #topic Alembic instead of sqlalchemy-migrate
16:18:16 <jungleboyj> smcginnis: You again.
16:18:26 * jungleboyj defers to shadow PTL
16:19:00 <e0ne> I like this idea in general, but do we have some easy way to move existing migrations to alembic?
16:19:05 <smcginnis> So this came up in the nova channel.
16:19:33 <smcginnis> zzeeek (I may have missed some z's and e's there) is the maintainer of both and had actually deprecated sqlalchemy-migrate years ago.
16:19:43 <smcginnis> The direction has been to get off of that and use alembic.
16:20:01 <smcginnis> I don't think that was communicated too widely. Or at least I wasn't really aware of that.
16:20:15 <smcginnis> So he would like to stop maintaining it, but there are still a few OpenStack services using it.
16:20:15 <jungleboyj> I had no idea.
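Since the question of what the rewritten migrations would actually look like comes up just below, here is a minimal, hypothetical alembic revision script for comparison with the sequentially numbered sqlalchemy-migrate scripts. The revision identifiers, table, and column are invented for illustration and are not from the Cinder tree.

```python
"""Add an example column to volumes.

Hypothetical alembic revision, for illustration only.
"""
import sqlalchemy as sa
from alembic import op

# Alembic identifies migrations by (usually random) revision ids chained
# through down_revision pointers, rather than by sequential numbering.
revision = 'a1b2c3d4e5f6'
down_revision = None  # first revision in the chain
branch_labels = None
depends_on = None


def upgrade():
    op.add_column('volumes',
                  sa.Column('example_flag', sa.Boolean(), nullable=True))


def downgrade():
    op.drop_column('volumes', 'example_flag')
```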
16:20:37 <smcginnis> I know glance had done the migration, so at least there are some examples of it being done.
16:20:53 <smcginnis> And he sounded very willing to help with doing the migration.
16:20:54 <rosmaita> we had 2 people working on it for about 1.5 cycles
16:21:09 <smcginnis> rosmaita: Oh wow, I was hoping it was less effort than that. :/
16:21:27 <rosmaita> well, we were doing the rolling upgrade stuff at the same time
16:21:36 <smcginnis> Well, if we need to get off of it, we probably should get started then if it's going to take that long.
16:21:50 <smcginnis> Ah, so it might be easier for us then. That's good.
16:21:53 <rosmaita> and it was maybe ocata? hopefully by now it's a bit easier
16:22:15 <smcginnis> I was hoping he would just say "run this tool and it does it for you", but no such luck. ;)
16:22:27 <e0ne> what is the procedure to re-write current migrations?
16:22:36 <smcginnis> There are guides out there.
16:23:18 <smcginnis> Maybe someone is more of an expert, but my impression is you now have UUID migration scripts instead of numbered ones, and it's a little more flexible in how you manage those.
16:23:23 <smcginnis> rosmaita: Any experience there?
16:24:04 <rosmaita> we didn't duplicate all migrations, just started with a liberty db (this was in ocata) and did the few migrations from there
16:24:17 <smcginnis> That makes sense.
16:24:34 <rosmaita> our migration scripts are named by release (which actually may be a problem)
16:24:51 <smcginnis> And I was collapsing the cinder migrations for a while too, so we could just start with a base supported schema and not have to do step by step since Folsom anyway.
16:25:03 <rosmaita> that's the way to go
16:25:04 <e0ne> is there any test helper in oslo.db to test alembic migrations?
16:25:22 <rosmaita> yeah, there are some mixins
16:25:29 <jungleboyj> smcginnis: ++
16:25:40 <e0ne> rosmaita: great
16:26:15 <rosmaita> we carried both old and new migration scripts for one release, "just in case", but it didn't turn out to be necessary (as far as i've heard)
16:26:39 <rosmaita> my impression is that alembic is really solid
16:26:47 <smcginnis> So if we agree, I think we need to get started scoping this work. We can probably enlist zzzeeeeeeeeeek for some help. Not sure if we want to write a spec, but we should probably have a blueprint to at least track it all. And any volunteers to lead the effort would help.
16:27:18 <smcginnis> rosmaita: Unrelated, but that reminds me. We can probably clean that sqlalchemy-migrate stuff out of the glance repo.
16:27:20 <rosmaita> looks like a good midcycle topic
16:27:25 <smcginnis> rosmaita: ++
16:27:28 <e0ne> smcginnis: +1. I'll take a look at it and see if I can help with this effort
16:27:36 <smcginnis> e0ne: Great!
16:27:37 <jungleboyj> rosmaita: ++
16:27:53 <jungleboyj> I have some DB experience. Will help if I can.
16:28:10 <yikun> I can also help with it. :)
16:28:12 <smcginnis> Let's do a little research and regroup at the midcycle to hammer out a plan.
16:28:22 <rosmaita> sounds good
16:28:23 <e0ne> smcginnis: +1
16:28:25 <jungleboyj> smcginnis: ++
16:28:38 <smcginnis> That's all from me then.
16:28:40 <whoami-rajat> smcginnis: count me in too.
16:28:59 <luizbag> I could help too
16:29:01 <jungleboyj> smcginnis: Cool. Thank you.
16:29:08 <smcginnis> Awesome, thanks everyone.
16:29:23 <jungleboyj> #topic Cinderlib
16:29:30 <jungleboyj> geguileo: You here?
16:29:37 <geguileo> jungleboyj: yup
16:29:50 <jungleboyj> Cool. The floor is yours.
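Before the cinderlib topic, a quick illustration of the oslo.db test mixins rosmaita mentions above in answer to e0ne's question. This is a hedged sketch of how the ModelsMigrationsSync mixin from oslo_db.sqlalchemy.test_migrations could be wired up for alembic-based migrations; the Cinder model import, the alembic.ini location, and the SQLite file are assumptions for illustration, not an existing Cinder test.

```python
# Hedged sketch: comparing alembic migrations against the SQLAlchemy models
# with the oslo.db ModelsMigrationsSync mixin. Paths and imports below are
# assumptions, not from the Cinder tree.
import sqlalchemy as sa
from alembic import command as alembic_command
from alembic.config import Config as AlembicConfig
from oslo_db.sqlalchemy import test_migrations
from oslotest import base


class TestModelsSyncSQLite(test_migrations.ModelsMigrationsSync,
                           base.BaseTestCase):
    """Check that the alembic migrations match the SQLAlchemy models."""

    # File-backed SQLite so the mixin and alembic see the same database.
    DB_URL = 'sqlite:///cinder_test_migrations.sqlite'

    def get_engine(self):
        return sa.create_engine(self.DB_URL)

    def get_metadata(self):
        # Hypothetical import path; Cinder's models live under
        # cinder.db.sqlalchemy at the time of this meeting.
        from cinder.db.sqlalchemy import models
        return models.BASE.metadata

    def db_sync(self, engine):
        # Run the (hypothetical) alembic migrations up to head against the
        # same database the mixin will compare the models with.
        cfg = AlembicConfig('alembic.ini')  # assumed config location
        cfg.set_main_option('sqlalchemy.url', str(engine.url))
        alembic_command.upgrade(cfg, 'head')
```

The mixin supplies the actual comparison test; the class only has to say how to get an engine, the model metadata, and how to run the migrations.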
16:29:59 <geguileo> I just wanted to ask for reviews on the cinderlib patches
16:30:15 <geguileo> though right now the gate seems to be failing for unrelated issues
16:30:34 <geguileo> I also wanted to know if anybody had any questions related to cinderlib
16:30:50 <geguileo> I know that hemna started looking at it and had a couple...
16:30:52 <jungleboyj> hemna did, the other day.
16:31:07 <jungleboyj> He was wondering why the DB was there.
16:31:22 <geguileo> jungleboyj: I answered him afterwards on the channel, but he was away
16:31:34 <jungleboyj> geguileo: Ok. Were my answers close?
16:31:34 <geguileo> in case anybody is wondering the same question
16:31:49 <geguileo> jungleboyj: I don't remember XD
16:32:01 <jungleboyj> *sad trombone.wav*
16:32:14 <e0ne> jungleboyj: :)
16:32:16 <geguileo> I don't have a great memory XD
16:32:22 <jungleboyj> Ok.
16:32:22 <geguileo> basically, the thing is that cinderlib implements a persistence plugin system
16:32:42 <geguileo> so you can either keep the metadata in memory and then the user of cinderlib can store this data wherever they want
16:32:50 <geguileo> using the json serialization mechanism
16:33:02 <geguileo> or they can use a plugin to store it in a DB (included plugin)
16:33:07 <geguileo> or write their own plugins
16:33:13 <geguileo> like I did for the Ember-CSI project
16:33:26 <geguileo> where I store the Cinder metadata into CRDs in the k8s deployment
16:33:30 <jungleboyj> Ok. Makes sense.
16:33:42 <jungleboyj> I roughly said that to hemna
16:33:50 <geguileo> jungleboyj: thanks! :-)
16:34:11 <geguileo> I know that smcginnis also had a look at some of the patches and made some suggestions
16:34:19 <jungleboyj> Anyone else have questions?
16:34:27 <geguileo> jungleboyj: I have a question
16:34:36 <geguileo> When is the deadline to get these patches merged?
16:34:47 <jungleboyj> I would say milestone-3
16:35:09 <geguileo> ok
16:35:23 <jungleboyj> smcginnis: You agree?
16:36:27 <jungleboyj> I mean, as per our processes it can't go in any later than that.
16:36:44 <jungleboyj> It isn't really a driver so I hadn't enforced ms-2.
16:37:03 <jungleboyj> It is just a tech preview, but we don't want to put anything in later than ms-3.
16:37:07 <eharney> sounds reasonable to me
16:37:17 * jungleboyj is thinking out loud.
16:37:37 <smcginnis> Yeah, sounds good.
16:37:43 <jungleboyj> So, milestone-3 sounds to be the answer.
16:37:49 <geguileo> thanks
16:38:04 <jungleboyj> geguileo: Thanks. Sorry I haven't reviewed everything yet.
16:38:08 <jungleboyj> I will work on that.
16:38:15 <geguileo> jungleboyj: thanks!!!
16:38:39 <jungleboyj> Ok. So that is all from you, geguileo?
16:38:44 <geguileo> yup
16:38:53 <jungleboyj> Ok. That was all we had on the agenda.
16:39:00 <jungleboyj> #topic OpenDiscussion
16:39:10 <jungleboyj> Anything else to talk about today?
16:39:16 <rosmaita> i have something
16:39:23 <jungleboyj> rosmaita:
16:39:26 <jungleboyj> Go for it.
16:39:32 <rosmaita> i need some stable cores to take a look at https://review.openstack.org/#/c/629463/2
16:39:41 <rosmaita> it's a squash of 4 cherry picks backported from rocky to queens
16:39:49 <rosmaita> i explain in the commit message why i did it like that
16:39:58 <rosmaita> (though my practical reason is that this is going to have to go into pike, too)
16:40:02 <jungleboyj> rosmaita: I saw that earlier and was waiting for the check to pass before looking closer.
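Returning to the cinderlib persistence discussion above, here is a rough sketch of the idea geguileo describes: keep metadata in memory and let the caller persist the JSON-serialized objects wherever it wants, or use the bundled DB plugin instead. The configuration keys, driver parameters, and serialization attribute names are assumptions based on the cinderlib documentation of the time, and the example presumes an LVM volume group is available; treat it as a sketch rather than verified usage of the library.

```python
# Hedged sketch of cinderlib's persistence plugin idea. Option names,
# driver parameters, and attribute names are assumptions, not taken from
# the meeting or verified against a specific cinderlib release.
import cinderlib

# Default-style setup: metadata is kept in memory, and the caller persists
# the JSON-serialized objects wherever it wants (a file, a CRD, etc.).
cinderlib.setup(persistence_config={'storage': 'memory'})

lvm = cinderlib.Backend(
    volume_backend_name='lvm',
    volume_driver='cinder.volume.drivers.lvm.LVMVolumeDriver',
    volume_group='cinder-volumes',   # assumed volume group name
    target_protocol='iscsi',
    target_helper='lioadm',
)

vol = lvm.create_volume(size=1)      # 1 GiB test volume
serialized = vol.jsons               # JSON string (assumed attribute name)
# ... store `serialized` anywhere, then rebuild the object later ...
restored = cinderlib.load(serialized)

# Alternatively, the bundled DB persistence plugin stores the metadata in a
# database instead of relying on caller-side serialization (assumed keys):
# cinderlib.setup(persistence_config={
#     'storage': 'db',
#     'connection': 'sqlite:///cinderlib.sqlite',
# })
```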
16:40:07 <rosmaita> but i can do 4 separate cherry picks if that's preferable
16:40:11 <smcginnis> Makes sense. I'll take a look at it.
16:40:15 <rosmaita> jungleboyj: it has passed, thanks for the recheck
16:40:23 <jungleboyj> rosmaita: Yep.
16:40:38 <smcginnis> Can it easily be separated into separate changes? Or does it need to be together to make tests pass?
16:40:40 <jungleboyj> I thought we had talked about that before and agreed to do the squashed patch.
16:41:32 <rosmaita> yes, we had discussed it on the bug
16:41:43 <smcginnis> Yeah, this looks good to me.
16:41:55 <rosmaita> ok, cool
16:42:15 <jungleboyj> If it were more lines of code I would be worried, but I think it is ok once it passes tests.
16:42:57 <jungleboyj> Anything else on that?
16:43:12 <rosmaita> nope, that's all from me ... thanks!
16:43:22 <jungleboyj> rosmaita: Thanks.
16:43:29 <jungleboyj> Any other topics?
16:43:40 <whoami-rajat> jungleboyj: i've a request
16:43:47 <jungleboyj> whoami-rajat: Go ahead.
16:44:52 <whoami-rajat> if anyone has time to look into this bug https://bugs.launchpad.net/cinder/+bug/1811663
16:44:53 <openstack> Launchpad bug 1811663 in Cinder "Gate failure : AssertionError: Lists differ: [] != [<Thread(tpool_thread_0, started daemon 14[1123 chars]56)>]" [Undecided,In progress] - Assigned to Rajat Dhasmana (whoami-rajat)
16:45:39 <whoami-rajat> it was proposed in the last meeting.
16:45:42 <jungleboyj> #link https://bugs.launchpad.net/cinder/+bug/1811663
16:45:49 * smcginnis sees a Thread and looks in geguileo's direction
16:45:50 <smcginnis> :)
16:46:01 <geguileo> XD
16:46:07 * jungleboyj did that as well
16:46:19 <geguileo> smcginnis: I think I agreed to look into it and I didn't...
16:46:26 <smcginnis> Hah
16:46:26 * jungleboyj hopes geguileo's threads don't unravel
16:47:11 <geguileo> yup, I agreed in a previous meeting to fix that one...
16:47:19 <geguileo> let's see if I can get it fixed this time
16:47:29 <jungleboyj> :-)
16:47:31 <whoami-rajat> geguileo: Thanks!
16:47:33 <jungleboyj> Yay for my notes!
16:48:05 <jungleboyj> geguileo: Thank you.
16:48:14 <jungleboyj> whoami-rajat: Any other bugs that need attention?
16:48:28 <smcginnis> All of them.
16:48:36 <jungleboyj> :-)
16:48:47 <jungleboyj> Hear no evil, see no evil.
16:49:23 <whoami-rajat> jungleboyj: not right now. will prepare some for next week.
16:49:55 <jungleboyj> Ok. Sounds good. No worries. We have been running out of time in meetings lately.
16:50:14 <rosmaita> whoami-rajat: i misread that as you would create some new bugs in cinder!
16:50:26 * jungleboyj is laughing
16:50:35 <jungleboyj> We have plenty of people doing that. Don't need help.
16:50:47 <yikun> lol
16:50:56 <whoami-rajat> rosmaita: haha, will prepare a list*
16:51:03 <rosmaita> :)
16:51:14 <jungleboyj> Ok. Other topics then?
16:51:55 <jungleboyj> We may plan to do a review of the mid-cycle topics when we meet next week.
16:52:32 <jungleboyj> So, please get your topics in there.
16:53:03 <smcginnis> ++
16:53:09 <jungleboyj> It has gotten quiet so I think we can wrap up. :-)
16:53:17 <smcginnis> ++ :)
16:53:29 <jungleboyj> :-) +++++++++++++++++++++
16:53:37 <jungleboyj> Thank you for joining, team.
16:53:54 <jungleboyj> Stay warm, stay out of the snow and have a good rest of your week.
16:54:06 <smcginnis> Thanks!
16:54:07 <enriquetaso> bye!
16:54:11 <jungleboyj> Bye all!
16:54:19 <jungleboyj> #endmeeting