17:00:05 <dansmith> #startmeeting nova-cells
17:00:06 <openstack> Meeting started Wed Jun  6 17:00:05 2018 UTC and is due to finish in 60 minutes.  The chair is dansmith. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:07 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:09 <openstack> The meeting name has been set to 'nova_cells'
17:00:15 <tssurya> o/
17:00:26 <mriedem> o/
17:00:33 <melwitt> o/
17:00:43 <dansmith> #topic bugs
17:00:49 <dansmith> any bugs people want to highlight?
17:01:09 <mriedem> there was the one yesterday...
17:01:26 <mriedem> https://bugs.launchpad.net/nova/+bug/1773945
17:01:27 <openstack> Launchpad bug 1773945 in OpenStack Compute (nova) "nova client servers.list crashes with bad marker" [Undecided,New] - Assigned to Surya Seetharaman (tssurya)
17:01:29 <tssurya> the revert fixes it?
17:01:41 <mriedem> i left a comment https://bugs.launchpad.net/nova/+bug/1773945/comments/9
17:01:44 <tssurya> or should we also log if there is a NULL cell_mapping?
17:01:46 <tssurya> ack
17:01:47 <mriedem> the novaclient revert fixes a specific problem in the client
17:02:35 <dansmith> yeah, I think it's legit to log an error if the mapping is null so that the operator can fix it up
17:02:43 <mriedem> and raise 500
17:02:45 <mriedem> ?
17:02:46 <dansmith> yeah
17:02:47 <mriedem> ok
17:02:54 <dansmith> because it's a legit internal "we screwed up" error
17:03:07 <dansmith> tssurya: are you going to do that or do you want me to?
17:03:17 <dansmith> happy to do it if you want,
17:03:20 <tssurya> I can do it
17:03:20 <dansmith> but don't want to steal it away :)
17:03:22 <dansmith> okay
17:03:28 <tssurya> :)
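For the record, a minimal sketch of the handling just discussed; the names InternalServerError and lookup_marker_cell are hypothetical, and this is not nova's actual code:

    import logging

    LOG = logging.getLogger(__name__)


    class InternalServerError(Exception):
        """Illustrative stand-in for whatever maps to an HTTP 500."""


    def lookup_marker_cell(instance_mapping):
        """Return the marker's cell mapping, failing loudly if unset.

        A NULL cell_mapping is an internal "we screwed up"
        inconsistency, so log an error the operator can act on and
        raise a 500 instead of crashing with a bad-marker traceback.
        """
        if instance_mapping.cell_mapping is None:
            LOG.error('Instance mapping for marker %s has no '
                      'cell_mapping set; the instance_mappings record '
                      'needs to be fixed up.',
                      instance_mapping.instance_uuid)
            raise InternalServerError()
        return instance_mapping.cell_mapping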
17:03:42 <dansmith> any other bugs?
17:03:51 <tssurya> nope
17:04:00 <tssurya> I mean, from my side, no
17:04:08 <dansmith> #topic open reviews
17:04:19 <dansmith> so the main one in my head is tssurya's down-cell spec
17:04:29 <dansmith> which is up against the deadline tomorrow, right?
17:04:36 <tssurya> yea,
17:04:41 <tssurya> I am currently updating it
17:04:56 <dansmith> okay cool, ping us when it's up so we can make sure to jump on it
17:04:57 <tssurya> based on mriedem's comments
17:05:02 <tssurya> yep thanks!
17:05:07 * bauzas just sits at the back
17:05:18 <dansmith> I didn't really have anything else to add to mriedem's comments after our discussion yesterday
17:05:24 <tssurya> this one: https://review.openstack.org/#/c/560042/
17:05:27 <melwitt> there's my count quotas with placement spec (which needs an update) but I thought we can't have that because of the lack of type/owner in placement. is that everyone else's understanding too?
17:05:30 <tssurya> dansmith has already given his opinion
17:05:48 <tssurya> would be nice to get the opinion of others
17:06:16 <tssurya> I agree it's compute-intensive
17:06:43 <dansmith> melwitt: yeah, at this point I think you're looking at a dependent spec for some placement api changes, and I can't really see that happening alongside all the stuff we discussed yesterday on the migrator thing
17:06:59 * melwitt nods
17:07:06 <dansmith> tssurya: let's ping jay after this to get him to weigh in on that
17:07:18 <tssurya> dansmith: ack
17:08:05 <dansmith> any other open reviews to highlight?
17:08:40 <dansmith> #topic open discussion
17:08:53 <mriedem> heal_allocations CLI has a +2 from gibi and i just need to get jay to come back on it
17:08:55 <mriedem> https://review.openstack.org/#/c/565886/
17:09:01 <mriedem> sounds like belmiro might already be using it?
17:09:04 <mriedem> tssurya: ^?
17:09:06 <tssurya> yes
17:09:10 <tssurya> :D
17:09:14 <dansmith> heh
17:09:20 <tssurya> it fixed what we wanted it to, thanks
17:09:20 <dansmith> cern is already using rocky; they just don't want to call it rocky
17:09:22 <dansmith> from the sound of it
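For reference, the command under review is invoked roughly as below; the --max-count flag is assumed from the patch and could change before merging:

    # Heal placement allocations for instances that are missing them,
    # processing at most 50 instances in this run (--max-count assumed):
    nova-manage placement heal_allocations --max-count 50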
17:09:30 <mriedem> tssurya: i also wanted to find out about https://review.openstack.org/#/c/569247/
17:09:31 <tssurya> dansmith: well said :)
17:09:34 <dansmith> heh
17:09:45 <mriedem> that was an attempt to optimize the scheduler pulling all instances from all hosts on every scheduling request
17:09:47 <melwitt> I was wondering, have we got any info about the nova-net => neutron migration and whether or not it's okay for us to remove some nova-net REST APIs this cycle? or if we should defer those to stein?
17:09:51 <mriedem> which you guys said was slower now
17:10:13 <dansmith> melwitt: I thought they said in yvr that they wanted us to wait?
17:10:18 <tssurya> mriedem: we haven't had time to look at this recently; I will comment on the patch once we apply it
17:10:25 <dansmith> melwitt: but that we could do the api bits because they don't use those
17:10:25 <mriedem> tssurya: i posted some test results in that patch and from my devstack env it didn't seem to make a difference, but maybe i just need more computes or fake instances to hit the right scale to see a difference
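A rough illustration of the idea behind that patch, not the actual change; query_host_to_instance_uuids is a hypothetical per-cell query invented for this sketch:

    import collections


    def query_host_to_instance_uuids(cell):
        """Hypothetical lightweight per-cell query that returns
        (host, instance_uuid) rows instead of full instance objects."""
        return cell.get('instance_rows', [])


    def instances_by_host(cells):
        """Build a host -> set-of-instance-uuids map with one query
        per cell, rather than pulling every instance from every host
        on each scheduling request."""
        host_instances = collections.defaultdict(set)
        for cell in cells:
            for host, instance_uuid in query_host_to_instance_uuids(cell):
                host_instances[host].add(instance_uuid)
        return host_instances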
17:10:39 <melwitt> dansmith: tssurya was going to double-check from what I remember last week?
17:10:53 <tssurya> melwitt: for us, after I asked belmiro, he said he is ok with removing the nova-net REST APIs
17:10:55 <mriedem> yeah i'm also wondering about the nova-net specific APIs
17:10:55 <melwitt> about whether the api bits are fair game and won't hurt the safety net
17:11:04 <melwitt> tssurya: okay, cool. thanks
17:11:09 <mriedem> tssurya: has anyone at cern done an actual audit of the impacted APIs?
17:11:14 <dansmith> we asked belmiro about apis point-blank in yvr and he said he was cool with it
17:11:17 <mriedem> because it's not all os-fping
17:11:36 <tssurya> mriedem: I pointed him to the etherpad; he said he will comment on each of them
17:11:39 <tssurya> I will remind him again
17:11:40 <mriedem> https://etherpad.openstack.org/p/nova-network-removal-rocky
17:11:43 <mriedem> ah ok
17:11:47 <mriedem> that's what i'm looking for, thanks
17:11:58 <mriedem> like, os-networks worries me
17:12:22 <mriedem> the various os-floating-ips ones don't since cells v1 didn't support floating ips
17:12:26 <mriedem> or security groups
17:12:33 <dansmith> mriedem: none of those apis are enabled for multiple cells, and since they're working on v2, I can't imagine any of them get touched at all
17:12:42 <mriedem> ok
17:12:56 <mriedem> this meeting is logged so i can say i warned everyone
17:12:58 <dansmith> like os-networks/add can't possibly do the right thing
17:13:33 <dansmith> it won't add it in the right db, nor talk to the right nova-network daemon, etc
17:13:36 <mriedem> melwitt: so you're going to drop the -W from my os-fping patch?
17:13:38 <tssurya> speaking of the meeting being logged, I know I brought up nova service-list --cell yesterday, but my team wanted to know if there was an easy CLI way of knowing which cell an instance is in
17:13:44 <melwitt> that's a good point. would still be reassuring to get belmiro's ack on that etherpad if he can spare any time
17:13:45 <melwitt> mriedem: yes
17:14:05 <tssurya> melwitt: he is not here this week, I will ensure he gives his nod on Monday
17:14:19 <mriedem> does list_cells list instances too with an option?
17:14:32 <dansmith> verify_instance
17:14:44 <dansmith> will show the name and uuid of the cell
17:14:47 <dansmith> that the instance is in
17:15:11 <tssurya> dansmith: oh totally forgot we had that command
17:15:31 <melwitt> thanks tssurya
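For the record, the command looks like this; the instance uuid is a placeholder:

    # Prints the cell (name and uuid) that the given instance is mapped to:
    nova-manage cell_v2 verify_instance --uuid <instance-uuid>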
17:16:06 <dansmith> okay, anything else?
17:16:40 <tssurya> not from me,
17:16:47 <mriedem> nope
17:16:57 <tssurya> except thanks for the praise at the summit
17:17:09 <dansmith> oh jeez,
17:17:17 <dansmith> mriedem was just fawning all over you to belmiro
17:17:21 <dansmith> it was embarrassing
17:17:25 <tssurya> I heard you did too ;)
17:17:28 <dansmith> oh
17:17:29 <melwitt> haha
17:17:30 <dansmith> maybe :)
17:17:31 <dansmith> haha
17:17:31 <tssurya> and so did melwitt
17:17:34 <tssurya> hehe
17:17:41 <tssurya> thanks a lot!
17:18:14 <dansmith> okay, mushy stuff aside, sounds like we're done...
17:18:44 <dansmith> #endmeeting