17:10:32 #startmeeting nova_cells
17:10:33 Meeting started Wed May 24 17:10:32 2017 UTC and is due to finish in 60 minutes. The chair is dansmith. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:10:34 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:10:37 The meeting name has been set to 'nova_cells'
17:10:38 sorry, I got sucked into something
17:10:53 #topic bugs/testing
17:11:22 i have an (old) bug
17:11:23 once we get this patch in: https://review.openstack.org/#/c/458537
17:11:34 we will be able to merge the devstack patch and be testing on multicell all the time
17:11:49 so just FYI
17:11:53 mriedem: your bug, go
17:12:00 crap too fast
17:12:15 https://review.openstack.org/#/c/420461/2
17:12:29 is the newton backport of the fix for defaulting the cell0 connection name to nova_cell0 rather than nova_api_cell0
17:12:38 aye, cool
17:12:45 i know i tried this back when we realized it was a problem and fixed it in ocata,
17:12:59 the grenade job is failing, i'm not entirely sure why, but before i dig into this,
17:13:13 i wanted to talk amongst friends about whether this is a good idea for stable/newton at this point
17:13:27 the reason i revived it is i think people are hitting this,
17:13:32 it only affects people that haven't been through the process yet so I think it's okay?
17:13:45 they are doing this in newton and getting nova_api_cell0, and then that's going to cause issues when going to ocata, where we expect nova_cell0
17:13:55 dansmith: that was my justification as well
17:14:14 wait
17:14:18 I don't think that is an issue,
17:14:18 meaning the ocata code looks for something named "nova_cell0" or?
17:14:24 i thought either someone in irc or on ask.o.o hit issues with this b/c the db they created didn't match what simple_cell_setup created, and then nova-manage db sync failed
17:14:27 because we look up cell0 in the database
17:14:43 melwitt: no, i don't think so explicitly
17:14:51 but our docs say things like nova_cell0
17:14:58 or 'main cell db name plus _cell0'
17:15:06 so i think people are creating a nova_cell0 db,
17:15:12 yeah, just being consistent is good,
17:15:13 but anyone that is set on newton is fine when they move to ocata I think
17:15:14 running simple_cell_setup, which creates the wrong connection in the api db cell_mappings,
17:15:18 then running nova-manage db sync, and that fails
17:15:41 ok, so i can continue working on figuring out the grenade issue,
17:15:45 agreed, being consistent would be good
17:15:51 if i sit on the can long enough we'll drop grenade in newton b/c mitaka is eol
17:16:02 heh
17:16:05 but with tony out of a job that's not going away it seems
17:16:14 maybe that's the answer to LTS, never hire tony
17:16:18 hah
17:16:27 but moving on
17:16:37 fyi, the nova/context.py cache stuff merged to ocata and newton for the increased db connections bug
17:16:37 any other bugs/testing things?
17:16:44 cool
17:16:47 the backports
17:17:47 anything else?
17:18:10 oh, it's in newton now,
17:18:17 that was why i needed to sort out this other thing,
17:18:31 because the special db creds fix is dependent on this cell0 naming fix,
17:18:36 and i wanted to get those out in the same stable release
17:18:39 if possible
17:19:00 but i spent all morning in customer support
17:19:04 so i'll do it this afternoon
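The cell0 naming fix discussed above comes down to which connection URL the default is derived from. A minimal sketch of the intended derivation, assuming only that the default cell0 database name is the main database name plus "_cell0"; the helper name here is hypothetical, not nova's actual code:

```python
from urllib.parse import urlsplit, urlunsplit

def default_cell0_url(main_db_url):
    """Append '_cell0' to the database name of a connection URL.

    Hypothetical helper: the newton bug was that this default was
    derived from the API DB URL (yielding nova_api_cell0) instead of
    the main DB URL (yielding nova_cell0, which the docs and the
    ocata tooling expect).
    """
    parts = urlsplit(main_db_url)
    # The database name is the URL path, e.g. '/nova' -> '/nova_cell0'.
    return urlunsplit(parts._replace(path=parts.path + '_cell0'))

# default_cell0_url('mysql+pymysql://nova:pass@127.0.0.1/nova')
#   -> 'mysql+pymysql://nova:pass@127.0.0.1/nova_cell0'
```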
17:19:34 moving on?
17:20:03 yeah
17:20:25 #topic open reviews
17:20:30 I have those open I linked above
17:20:36 melwitt still has quotas stuff up
17:20:46 mriedem: still some of the host api stuff remaining, right?
17:20:48 i plan on getting back on the base quotas patch again
17:20:57 yes, sec
17:20:59 yeah, I pushed the update to address mriedem's comments on the first patch last night
17:21:11 https://review.openstack.org/#/c/461519/ first two there
17:21:16 Vek has been on top of the rebases and +2s
17:21:20 so i just need one of your two a-holes to +W those
17:21:34 ha,
17:21:38 *you two a-holes
17:21:39 not your two
17:21:45 heh
17:21:48 although that would be an interesting development
17:22:17 mriedem: I have a tab open for those and will look when we're done here
17:23:22 word to your mother
17:23:40 anything else here?
17:23:50 not from me
17:23:54 #topic open discussion
17:24:18 I got nothing (that I haven't already mentioned)
17:24:31 melwitt had a possible thing last night,
17:24:43 about creating servers in a group in separate cells,
17:24:51 maybe for anti-affinity,
17:24:58 and then needing to iterate the cells to determine membership quota?
17:25:15 well, like anything else we have to traverse cells for, I would think
17:25:45 yeah, another way I thought of is if you created an instance in group A and then the cell it's in no longer has capacity, the second instance you boot into group A could go into another cell. point being, I just have to make sure the count for that goes across cells
17:26:03 yeah
17:26:23 my side thought on this was that it'd be good to know if we have any testing for this,
17:26:28 I was getting confused yesterday b/c I forgot server groups are in the API DB, so I was thinking "how can I get all the groups?" but that's not an issue b/c API DB. so disaster averted
17:26:32 i think we could contain a test for this within nova functional
17:26:36 stubbing out multiple cells
17:26:51 melwitt: aye
17:27:30 mriedem: that's probably possible, though I dread doing it
17:27:31 i'm assuming we have some kind of filter we could use for an internal test to say a host is all used up once a single instance is on it
17:27:50 so create two cells, one host per cell, and a group with 2 instances
17:27:54 but yeah, easier said than done
17:28:51 hopefully it would be easy :P
17:28:57 okay well, sounds like that's sorted, aside from tests
17:29:12 anything else?
17:29:33 nope
17:29:44 nay
17:29:46 * dansmith waits for the nope to be seconded
17:29:47 woot
17:29:50 #endmeeting
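For the server-group discussion above, a rough sketch of what a cross-cell membership count could look like; this is not the actual quotas patch. It assumes nova's CellMappingList and InstanceList object APIs and a target_cell() context manager that yields a cell-targeted context (the exact shape of target_cell varies by release), and it ignores down-cell handling and efficiency:

```python
from nova import context as nova_context
from nova import objects

def count_group_members(ctxt, group):
    """Count a server group's member instances across every cell.

    Sketch only: the group itself lives in the API DB, but its member
    instances live in the per-cell DBs, so each cell must be queried.
    """
    total = 0
    for cell in objects.CellMappingList.get_all(ctxt):
        # Target the context at each cell database in turn.
        with nova_context.target_cell(ctxt, cell) as cctxt:
            members = objects.InstanceList.get_by_filters(
                cctxt, filters={'uuid': group.members, 'deleted': False})
            total += len(members)
    return total
```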
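And for the functional-test idea (two cells, one host per cell, a host "all used up" once a single instance lands on it), a hypothetical scheduler filter along the lines mriedem suggests. The class name is made up, and the real test would still need to stub out the two cells and enable this filter:

```python
from nova.scheduler import filters

class OneInstancePerHostFilter(filters.BaseHostFilter):
    """Hypothetical test filter: fail any host that already has an
    instance on it, forcing a 2-member group onto two hosts (and,
    with one host per cell, onto two cells)."""

    def host_passes(self, host_state, spec_obj):
        # host_state.num_instances is maintained by the scheduler's
        # HostState; zero means nothing has landed on this host yet.
        return host_state.num_instances == 0
```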