17:00:13 #startmeeting nova_cells
17:00:14 Meeting started Wed Jan 6 17:00:13 2016 UTC and is due to finish in 60 minutes. The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:15 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:17 The meeting name has been set to 'nova_cells'
17:00:53 anyone around today?
17:01:18 One person at least.
17:01:20 o/
17:01:26 woo
17:01:36 o/
17:01:39 o/
17:01:47 tbh I almost messed up with the two odd weeks
17:02:14 #topic Open Reviews
17:02:35 to start I just want to call out https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking which should have open reviews listed
17:02:46 and any that come up should be added there
17:02:54 secondly mriedem had a thing
17:03:06 Should we propagate the soft-delete mix-in to the Flavors tables in the API DB? https://review.openstack.org/#/c/201606/
17:03:08 o/
17:04:20 the primary folks for that code aren't here today
17:04:23 alaski: so you said there was some api implication to that?
17:04:41 right
17:04:49 currently soft-deleted flavors are still visible in the api
17:04:51 can you explain that in the change i guess?
17:04:55 if you know the id
17:05:03 sure
17:05:40 I think we should strive to not have flavors be soft-deletable in the api db
17:05:51 i agree,
17:05:54 Sounds good.
17:05:59 i don't know the details on the api thing, is it just the flavors api?
17:06:09 yeah
17:06:13 my concern with soft-deletable flavors in the api db is we don't have any way to clean them up
17:06:18 via nova-manage or other
17:06:40 i'd like the ops people to be on board with that though since it's a semi api change in behavior
17:06:41 for cells v2
17:06:51 yeah
17:06:54 so maybe a ML post is necessary
17:07:03 it is a bit murky because archived flavors wouldn't show up in the db
17:07:10 so it's sort of undefined territory
17:07:14 s/db/api/
17:07:20 Doesn't seem very 'semi' to me. If anyone is accessing via id it's kind of relying on 'hidden' behavior.
17:07:23 well, and no soft delete is still murky
17:07:33 i had a ML thread on that subject which petered out
17:07:47 i was trying to figure out how to explain in the nova devref why no soft delete anymore, but never really got there
17:07:51 doffm: agreed
17:08:35 mriedem: that would be good to have eventually
17:08:43 but yeah, ML sounds good
17:09:02 #action alaski send out ML post on removing soft-delete from flavors, and implications for API
17:09:39 #topic Open Discussion
17:10:05 looks like I should send out a ML post about the meeting times, since it's easy to mix them up
17:10:13 ccarmack: how is https://review.openstack.org/#/c/225199/ coming?
17:10:24 i see the thing here https://github.com/openstack/nova/blob/master/nova/api/openstack/compute/flavors.py#L54
17:10:30 I just had a general question - are there remaining work items for cells v2 or is everything in the etherpad?
17:10:37 one sec doffm
17:10:50 ccarmack: there are remaining items
17:10:57 https://github.com/openstack/nova/blob/master/nova/compute/flavors.py#L222
17:11:03 defaults to read_deleted='yes'
17:11:06 there's a series started at https://review.openstack.org/#/c/254434/ which is not complete
17:11:07 alaski: on that topic, what is our roadmap?
17:11:17 since flavors are almost done
17:12:53 mriedem: yeah. I wonder if we should file a bug for that behaviour? I'm not sure it's intentional
17:13:53 vineetmenon: the only open specs currently are the one related to the series I linked above, and one that melwitt is working on re: db connection switching
17:13:55 alaski: yeah idk, might be worth asking dansmith if he knows/remembers anything
17:14:45 yeah
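
(For context on the behaviour being questioned here: the flavor-lookup code linked above defaults to read_deleted='yes', so a soft-deleted flavor row is still returned when fetched by id. A minimal, self-contained sketch of that pattern follows; the Flavor model and get_flavor helper are illustrative stand-ins, not Nova's actual code.)

    # Minimal sketch, assuming illustrative names, of why a soft-deleted
    # flavor stays visible when the query layer defaults to
    # read_deleted='yes'. Not Nova's actual implementation.
    from sqlalchemy import Column, Integer, String, create_engine
    from sqlalchemy.orm import Session, declarative_base

    Base = declarative_base()

    class Flavor(Base):
        __tablename__ = 'flavors'
        id = Column(Integer, primary_key=True)
        name = Column(String(255))
        deleted = Column(Integer, default=0)  # soft-delete marker, 0 = live

    def get_flavor(session, flavor_id, read_deleted='yes'):
        # With read_deleted='yes' the filter on 'deleted' is skipped,
        # so a soft-deleted row is still returned by id.
        query = session.query(Flavor).filter_by(id=flavor_id)
        if read_deleted == 'no':
            query = query.filter_by(deleted=0)
        return query.first()

    engine = create_engine('sqlite://')
    Base.metadata.create_all(engine)
    with Session(engine) as session:
        session.add(Flavor(id=1, name='m1.tiny', deleted=1))  # soft-deleted
        session.commit()
        assert get_flavor(session, 1) is not None               # still visible
        assert get_flavor(session, 1, read_deleted='no') is None
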
17:14:51 alaski: as noted before, if you archive the nova db, that will return a 404 on a previously deleted flavor anyway
17:15:08 so i don't really see much of a huge api regression in not having soft-deletable flavors in the api db
17:15:13 That looks like a bug. It would be confusing to get back a flavor that you just deleted.
17:15:22 we'll just tell people their admin turned on an archive cron that goes every minute :)
17:15:27 doffm: but that's the desired behavior
17:15:29 heh
17:15:54 I think a ML discussion to make sure we all understand the consequences is the best we can do
17:16:07 aside from actually keeping the behavior
17:16:11 mriedem: that's why I think we can make a case to change the behavior, because there's no guarantee that the flavor will be there
17:16:31 even now
17:16:50 1. delete flavor. 2. show flavor to check it's deleted. 3. get it back and look confused.
17:17:09 alaski: right, admins could be purging flavors any time they delete them now
17:17:16 doffm: it's not that simple
17:17:27 doffm: the instance contains a permalink to the flavor it was created with, for all time
17:18:00 dansmith: I see.
17:18:43 now I recall, there was some discussion on whether we need to guarantee that link stays valid. which is not currently guaranteed
17:18:44 do we show that flavor id in the server get call?
17:18:57 mriedem: yeah
17:19:04 i'm just trying to figure out the use case
17:19:14 mriedem: so you can follow the link to the flavor
17:19:21 which is a pretty common api thing, AFAIK
17:19:21 so boot instance with flavor 1, delete flavor 1, i want to resize/migrate instance and want to know what i'm resizing from
17:19:42 or just know what flavor it is
17:19:44 so need the old (now deleted) flavor to do so
17:19:46 mriedem: not only that, I just may want to look at my instance's flavor's extra specs, or do billing or something
17:19:48 sure
17:19:57 anyway,
17:20:04 heh, maybe we shouldn't archive flavors :)
17:20:17 we know we just have to bite this bullet, I think, so let's cast the net to the ML and see who complains
17:20:20 I expect nobody
17:20:55 there is a path forward with flavor data in instance_extra now. but that would only affect api > some microversion if we expose that
17:20:58 yeah...
17:21:02 i was going to just say that :)
17:21:25 but for now, ML
17:21:30 alaski: right, we probably need to expose it in the instance itself to satisfy that need
17:22:29 dansmith: yeah. that may be better in general because it removes an extra api lookup to get at flavor info
17:23:14 yeah
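
(A rough sketch of the instance_extra idea being described: snapshot the flavor onto the instance at boot, then embed it in the instance show response so neither a flavors-table lookup nor a surviving flavor row is needed. The function and field names below are illustrative assumptions, not Nova's actual schema or any proposed microversion.)

    import json

    def store_boot_flavor(instance_extra, flavor):
        # Persist a point-in-time copy of the flavor with the instance,
        # so it survives the flavor being hard-deleted or archived.
        instance_extra['flavor'] = json.dumps({
            'flavorid': flavor['flavorid'],
            'name': flavor['name'],
            'vcpus': flavor['vcpus'],
            'memory_mb': flavor['memory_mb'],
            'extra_specs': flavor.get('extra_specs', {}),
        })

    def show_instance(instance, instance_extra):
        # Embed the stored flavor instead of only a permalink, so the
        # response stays valid even after the flavor row is gone.
        return {
            'id': instance['uuid'],
            'name': instance['name'],
            'flavor': json.loads(instance_extra['flavor']),
        }

    extra = {}
    store_boot_flavor(extra, {'flavorid': '1', 'name': 'm1.tiny',
                              'vcpus': 1, 'memory_mb': 512})
    print(show_instance({'uuid': 'abc-123', 'name': 'vm1'}, extra))
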
17:23:26 ccarmack: did you want to discuss https://review.openstack.org/#/c/225199/ now?
17:23:52 alaski: yea, 2 of the 3 patches have workflow +1, only the tempest patch needs work.
17:24:23 Should I still work on it? It's not directly related to cells
17:24:54 alaski, will cells v2 support security groups?
17:25:16 ccarmack: yes, cellsv2 will support everything
17:25:33 that's a primary reason for doing it
17:25:56 ccarmack: it also cleans up the tempest cells rc in nova
17:26:10 and so that we don't have to blacklist new tests that use security groups and break the cells job
17:26:15 since it would be configurable in tempest
17:26:40 alaski: I probably should still finish the patch because it's supposed to be a general config option
17:26:40 the icky part is how pervasive it is in tempest
17:27:00 ccarmack: you probably have to get with the qa team on that one, probably get it on their meeting agenda
17:27:01 was just looking it over, it seems worth having
17:27:11 * dansmith has to run to another meeting
17:27:16 o/
17:27:22 o/
17:28:01 mriedem: the last set of comments took me aback, as you say I should get with the qa team
17:28:39 ccarmack: http://eavesdrop.openstack.org/#QA_Team_Meeting
17:28:59 thanks mriedem
17:29:57 anything else for today?
17:30:15 ya
17:30:27 is there anything documented for migration plans from v1 to v2? or too early?
17:30:36 there was an ops thread from mikal last july
17:31:20 I don't think there's anything documented yet
17:31:27 it's not in the devref at least
17:31:44 do you have a rough idea?
17:31:48 it's not too early to get the broad outline of it laid out though
17:31:58 yeah
17:31:59 yeah, I have a vision for how it could work
17:32:04 I'll start writing that down
17:32:11 another question,
17:32:24 #action alaski write down rough migration plan for v1 to v2
17:32:46 at what point do we think we can stand up a deployment with cells v2? like is there a minimum goal that we need to get to, even if there is some missing stuff?
17:33:19 trying to figure out the 'roadmap' like was mentioned earlier
17:33:49 in my mind after the series at https://review.openstack.org/#/c/254434 (incomplete at the moment) lands we'll be in cellsv2
17:34:09 only for booting an instance, but the basic pieces will be in place
17:34:16 like the new db needing to be set up
17:34:49 maybe a better link is https://blueprints.launchpad.net/openstack/?searchtext=cells-scheduling-interaction
17:35:05 yeah i was going to say, should those all be linked to that bp?
17:35:20 yes
17:35:26 alaski: Once that series is in place then cell0 will be ready? Or multiple cells?
17:35:45 neither
17:36:07 which reminds me, we do have the cell0 spec still open
17:36:20 somehow I keep forgetting that one
17:36:39 Ok, i see the work on the cell0 spec.
17:37:35 the series above just starts filling out the instance_mapping for newly created instances
17:37:36 link?
17:37:50 and then queries that when listing/showing instances
17:38:06 https://review.openstack.org/#/c/238665/
17:38:28 oh merged, i see
17:38:42 so after the above series Nova will think it is scheduling to a cell
17:38:55 then we need to build out from there
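
(A rough sketch of the instance_mapping flow just described: record the owning cell when an instance is created, then consult the mapping when listing or showing instances. The dict-backed stores and function names are illustrative stand-ins for the API-database tables, not Nova's code.)

    # Illustrative stand-ins for the API DB's instance_mappings table and
    # the per-cell databases.
    instance_mappings = {}   # instance uuid -> cell name
    cell_databases = {}      # cell name -> {instance uuid: instance record}

    def create_instance(instance, cell):
        # Boot workflow: write the mapping as the instance is created.
        instance_mappings[instance['uuid']] = cell
        cell_databases.setdefault(cell, {})[instance['uuid']] = instance

    def show_instance(uuid):
        # Show/list workflow: find the owning cell first, then fetch the
        # instance from that cell's database.
        cell = instance_mappings.get(uuid)
        if cell is None:
            raise LookupError('no instance_mapping for %s' % uuid)
        return cell_databases[cell][uuid]

    create_instance({'uuid': 'abc-123', 'name': 'vm1'}, cell='cell1')
    assert show_instance('abc-123')['name'] == 'vm1'
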
17:40:18 ok, i'm just updating the review etherpad
17:40:19 with notes
17:40:21 i don't have anything else
17:40:23 cool
17:40:59 I guess for the current roadmap I would say that the goal this cycle is to have instance boots using the cells paradigm
17:41:07 and the flavor migration
17:41:20 so https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:master+topic:bp/cells-scheduling-interaction and https://review.openstack.org/#/c/213041
17:41:50 alright
17:41:51 yeah. and cell0, which is related to the scheduling one
17:41:56 gonna have this all merged by feb :) ?
17:42:17 heh. as much as I can :)
17:42:38 the more I work on it the more of a mess I feel I need to untangle to get this working
17:42:49 i guess i have to start reviewing some code
17:43:15 wrap up?
17:43:22 yep
17:43:23 * mriedem is hangry
17:43:37 unless someone speaks up in the next 5 seconds
17:43:44 #endmeeting