17:10:32 <dansmith> #startmeeting nova_cells
17:10:33 <openstack> Meeting started Wed May 24 17:10:32 2017 UTC and is due to finish in 60 minutes.  The chair is dansmith. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:10:34 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:10:37 <openstack> The meeting name has been set to 'nova_cells'
17:10:38 <dansmith> sorry, I got sucked into something
17:10:53 <dansmith> #topic bugs/testing
17:11:22 <mriedem> i have an (old) bug
17:11:23 <dansmith> once we get this patch in: https://review.openstack.org/#/c/458537
17:11:34 <dansmith> we will be able to merge the devstack patch and be testing on multicell all the time
17:11:49 <dansmith> so just FYI
17:11:53 <dansmith> mriedem: your bug, go
17:12:00 <mriedem> crap too fast
17:12:15 <mriedem> https://review.openstack.org/#/c/420461/2
17:12:29 <mriedem> is the newton backport of the fix for defaulting cell0 connection name to nova_cell0 rather than nova_api_cell0
17:12:38 <dansmith> aye, cool
17:12:45 <mriedem> i know i tried this back when we realized it was a problem and fixed it in ocata,
17:12:59 <mriedem> grenade job is failing, i'm not entirely sure why, but before i dig into this,
17:13:13 <mriedem> i wanted to talk amongst friends if this is a good idea for stable/newton at this point
17:13:27 <mriedem> the reason i revived it is i think people are hitting this,
17:13:32 <dansmith> it only affects people that haven't been through the process yet so I think it's okay?
17:13:45 <mriedem> they are doing this in newton and getting nova_api_cell0, and that's going to cause issues when going to ocata, where we expect nova_cell0
17:13:55 <mriedem> dansmith: that was my justification as well
17:14:14 <dansmith> wait
17:14:18 <dansmith> I don't think that is an issue,
17:14:18 <melwitt> meaning the ocata code looks for something named "nova_cell0" or?
17:14:24 <mriedem> i thought someone either in irc or on ask.o.o hit issues with this b/c the db they created didn't match what simple_cell_setup created, and then nova-manage db sync failed
17:14:27 <dansmith> because we look up cell0 in the database
17:14:43 <mriedem> melwitt: no i don't think so explicitly
17:14:51 <mriedem> but our docs say things like nova_cell0
17:14:58 <mriedem> or 'main cell db name plus _cell0'
17:15:06 <mriedem> so i think people are creating a nova_cell0 db,
17:15:12 <dansmith> yeah, just being consistent is good,
17:15:13 <dansmith> but anyone that is already set up on newton is fine when they move to ocata I think
17:15:14 <mriedem> running simple_cell_setup, which creates the wrong connection in the api db cell_mappings,
17:15:18 <mriedem> run nova-manage db sync, and that fails
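(For context, the fix being backported makes the cell0 connection default to the main cell database name plus "_cell0" — so nova_cell0 rather than nova_api_cell0. Below is a minimal standalone sketch of that derivation; the helper name is hypothetical and this is not nova's actual code.)

```python
# Sketch only: derive a cell0 connection URL from the main cell database
# URL by suffixing the database name with "_cell0". The helper name is
# hypothetical; nova's real default logic lives in its cell_v2 tooling.
from urllib.parse import urlsplit, urlunsplit


def default_cell0_connection(main_db_connection):
    parts = urlsplit(main_db_connection)
    # The path component carries the database name, e.g. "/nova".
    return urlunsplit((parts.scheme, parts.netloc, parts.path + '_cell0',
                       parts.query, parts.fragment))


print(default_cell0_connection('mysql+pymysql://nova:secret@127.0.0.1/nova'))
# -> mysql+pymysql://nova:secret@127.0.0.1/nova_cell0
```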
17:15:41 <mriedem> ok so i can continue working on figuring out the grenade issue,
17:15:45 <melwitt> agreed being consistent would be good
17:15:51 <mriedem> if i sit on the can long enough we'll drop grenade in newton b/c mitaka is eol
17:16:02 <dansmith> heh
17:16:05 <mriedem> but with tony out of a job that's not going away it seems
17:16:14 <mriedem> maybe that's the answer to LTS, never hire tony
17:16:18 <dansmith> hah
17:16:27 <mriedem> but moving on
17:16:37 <melwitt> fyi, the nova/context.py cache stuff merged to ocata and newton for the increased db connections bug
17:16:37 <dansmith> any other bugs/testing things?
17:16:44 <dansmith> cool
17:16:47 <melwitt> the backports
17:17:47 <dansmith> anything else?
17:18:10 <mriedem> oh it's in newton now,
17:18:17 <mriedem> that was why i needed to sort out this other thing,
17:18:31 <mriedem> because the special db creds fix is dependent on this cell0 naming fix,
17:18:36 <mriedem> and i wanted to get those out in the same stable release
17:18:39 <mriedem> if possible
17:19:00 <mriedem> but i spent all morning in customer support
17:19:04 <mriedem> so i'll do it this afternoon
17:19:34 <dansmith> moving on?
17:20:03 <mriedem> yeah
17:20:25 <dansmith> #topic open reviews
17:20:30 <dansmith> I have those open I linked above
17:20:36 <dansmith> melwitt still has quotas stuff up
17:20:46 <dansmith> mriedem: still some of the host api stuff remaining, right?
17:20:48 <mriedem> i plan on getting back on the base quotas patch again
17:20:57 <mriedem> yes, sec
17:20:59 <melwitt> yeah, I pushed the update addressing mriedem's comments on the first patch last night
17:21:11 <mriedem> https://review.openstack.org/#/c/461519/ first two there
17:21:16 <mriedem> Vek has been on top of the rebases and +2s
17:21:20 <mriedem> so i just need one of your two a-holes to +W those
17:21:34 <mriedem> ha,
17:21:38 <mriedem> *you two a-holes
17:21:39 <mriedem> not your two
17:21:45 <dansmith> heh
17:21:48 <mriedem> although that would be an interesting development
17:22:17 <dansmith> mriedem: I have a tab open for those and will look when we're done here
17:23:22 <mriedem> word to your mother
17:23:40 <dansmith> anything else here?
17:23:50 <mriedem> not from me
17:23:54 <dansmith> #topic open discussion
17:24:18 <dansmith> I got nothing (that I haven't already mentioned)
17:24:31 <mriedem> melwitt had a possible thing last night,
17:24:43 <mriedem> about creating servers in a group in separate cells,
17:24:51 <mriedem> maybe for anti-affinity,
17:24:58 <mriedem> and then needing to iterate the cells to determine membership quota?
17:25:15 <dansmith> well, like anything else we have to traverse cells for I would think
17:25:45 <melwitt> yeah, another way I thought of: if you created an instance in group A and then the cell it's in no longer has capacity, the second instance you boot into group A could land in another cell. point being, I just have to make sure the count for that goes across cells
17:26:03 <dansmith> yeah
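(A rough sketch of the cross-cell counting melwitt describes, assuming nova's objects API — CellMappingList, InstanceList — and the context-manager form of nova.context.target_cell; the function name is hypothetical and this is not the actual quota code. Since server groups live in the API DB, as noted below, the group's member list is available without touching the cells.)

```python
# Sketch only, not nova's quota code: count a server group's members by
# scanning every cell, since the group's instances may be spread across
# cells (anti-affinity, or a retry into a cell with remaining capacity).
from nova import context as nova_context
from nova import objects

objects.register_all()  # register the versioned objects on the namespace


def count_group_members(ctxt, group):
    total = 0
    for cell in objects.CellMappingList.get_all(ctxt):
        # target_cell is assumed here in its context-manager form.
        with nova_context.target_cell(ctxt, cell) as cctxt:
            instances = objects.InstanceList.get_by_filters(
                cctxt, filters={'uuid': group.members, 'deleted': False})
            total += len(instances)
    return total
```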
17:26:23 <mriedem> my side thought on this, was it'd be good to know if we have any testing for this,
17:26:28 <melwitt> I was getting confused yesterday bc I forgot server groups are in the API DB, so I was thinking "how can I get all the groups" but that's not an issue bc API DB. so disaster averted
17:26:32 <mriedem> i think we could contain a test for this within nova functional
17:26:36 <mriedem> stubbing out multiple cells
17:26:51 <dansmith> melwitt: aye
17:27:30 <melwitt> mriedem: that's probably possible, though I dread doing it
17:27:31 <mriedem> i'm assuming we have some kind of filter we could use for an internal test to say a host is all used up once a single instance is on it
17:27:50 <mriedem> so create two cells, one host per cell, and a group with 2 instances
17:27:54 <mriedem> but yeah, easier said than done
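(One way to express the "host is full after one instance" idea mriedem mentions is a tiny test-only scheduler filter. The class below is hypothetical, not an existing nova filter, and assumes the spec_obj-based filter interface; the built-in NumInstancesFilter capped at one instance per host would behave similarly.)

```python
# Hypothetical test-only filter, not part of nova: reject any host that
# already has an instance, so the second member of a 2-instance group
# has to land on the other cell's host.
from nova.scheduler import filters


class OneInstancePerHostFilter(filters.BaseHostFilter):
    """Treat a host as full once a single instance is on it."""

    def host_passes(self, host_state, spec_obj):
        return host_state.num_instances == 0
```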
17:28:51 <melwitt> hopefully it would be easy :P
17:28:57 <dansmith> okay well, sounds like that's sorted, aside from tests
17:29:12 <dansmith> anything else?
17:29:33 <mriedem> nope
17:29:44 <melwitt> nay
17:29:46 * dansmith waits for the nope to be seconded
17:29:47 <dansmith> woot
17:29:50 <dansmith> #endmeeting