17:00:11 <alaski> #startmeeting nova_cells
17:00:12 <openstack> Meeting started Wed Feb  3 17:00:11 2016 UTC and is due to finish in 60 minutes.  The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:16 <openstack> The meeting name has been set to 'nova_cells'
17:00:26 <mlavalle> o/
17:00:30 <doffm> o/
17:00:48 <melwitt> o/
17:00:49 <belmoreira> o/
17:01:04 <alaski> cool, good to see everyone
17:01:09 <alaski> #topic v1 testing
17:01:38 <alaski> looking at http://goo.gl/32yUGy the cells failures seem to mostly track normal failure rates
17:01:50 <alaski> melwitt: anything you're aware of?
17:02:09 <dansmith> oj <- head with headset on indicating multitasking on a call
17:02:10 <melwitt> alaski: no
17:02:26 <alaski> dansmith: heh, nice
17:02:33 <alaski> melwitt: great
17:02:54 <alaski> it's been really nice that things have been stable for a while
17:03:05 <alaski> #topic Open Reviews
17:03:19 <alaski> as always https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking is the place to go for reviews
17:03:38 <alaski> it looks like some new patches have been added which is great
17:04:12 <alaski> and now that we're past non-priority FF, getting reviews on cells patches and moving them forward is going to be important
17:04:25 <alaski> we have about a month left in M I think
17:05:21 <mlavalle> yes, end of February
17:05:37 <alaski> I'll try to go through all of the open reviews this week, if anyone can help review please do so
17:05:40 <alaski> mlavalle: thanks
17:05:52 <alaski> #topic Open Discussion
17:06:07 <doffm> rlrossit can't be here right now. (Has a meeting)
17:06:27 <doffm> But unless there is objection he is going to start looking at message broker switching issues.
17:06:32 <mlavalle> http://docs.openstack.org/releases/mitaka/schedule.html
17:06:34 <doffm> In preparation for an N spec.
17:06:59 <melwitt> oh, I was about to say I've been working on a WIP up for that at https://review.openstack.org/#/c/274955/
17:07:07 <doffm> melwitt: :)
17:07:14 <doffm> I will point rlrossit to it.
17:08:05 <alaski> that's great
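
[For context on the message broker switching work: in cells v2 each cell records its own transport URL in the api database, so switching brokers per cell reduces to building an oslo.messaging transport from the cell's URL rather than from the global config. A minimal sketch, assuming a CellMapping-style record with a transport_url field; the helper names are illustrative and not taken from the WIP linked above.]

    import oslo_messaging
    from oslo_config import cfg

    CONF = cfg.CONF

    def get_cell_transport(cell_mapping):
        # Build a transport from the cell's own broker URL instead of
        # the global [DEFAULT] transport configuration.
        return oslo_messaging.get_transport(CONF, url=cell_mapping.transport_url)

    def get_compute_client(cell_mapping, topic='compute'):
        # RPC client bound to the target cell's broker; illustrative only.
        target = oslo_messaging.Target(topic=topic)
        return oslo_messaging.RPCClient(get_cell_transport(cell_mapping), target)
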
17:09:10 <doffm> I have about 2 more patches to put in this week: 1 for devstack cell0 and another building on alaski's WIP for cell0 error state handling.
17:10:12 <alaski> doffm: how do you want to handle taking over my patch?  I can abandon and you can own it, or you can just steal the change-id if you'd like
17:10:29 <doffm> alaski: I'll just take the change ID.
17:11:01 <alaski> okay
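
[For readers following the cell0 work: the idea under discussion is that an instance which never gets scheduled is written into the special cell0 database in ERROR state, so the API can still show it to the user. A rough sketch using real nova objects but a hypothetical write path; the helper name and the context targeting are assumptions, not the WIP's actual code.]

    from nova.compute import vm_states
    from nova import objects

    def bury_in_cell0(ctxt, cell0_mapping, instance):
        # Point the instance's mapping at cell0 so API lookups find it there.
        mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance.uuid)
        mapping.cell_mapping = cell0_mapping
        mapping.save()
        # Record the instance in ERROR state; it never reached a real
        # cell, so there is no compute-side state to clean up.
        instance.vm_state = vm_states.ERROR
        instance.create()  # assumption: ctxt is already targeted at the cell0 DB
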
17:12:02 <alaski> speaking of N specs it's a good time to start thinking of those, as well as summit discussions
17:12:44 <doffm> Yeah, I guess melwitt and/or rlrossit will do message broker spec.
17:13:07 <doffm> I'd be grateful for ideas for other topics to work on a spec for.
17:13:37 * bauzas waves super late
17:13:45 <alaski> I haven't thought that far ahead yet
17:13:51 <doffm> Ok. :)
17:14:13 <alaski> but assuming everything planned in M gets done, there may be some work to add instance_mapping lookups to all api calls
17:14:56 <alaski> that'll be a requirement for looking at multiple cells
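
[To make the instance_mapping idea concrete: every API call that touches an instance would first resolve which cell the instance lives in via the instance_mappings table in the api database, then query that cell's database. A hedged sketch; target_cell() here is a hypothetical helper written for illustration, and the db_connection attribute is an assumption.]

    import copy

    from nova import objects

    def target_cell(ctxt, cell_mapping):
        # Hypothetical helper: a copy of the request context whose DB
        # connection points at the given cell's database.
        cctxt = copy.copy(ctxt)
        cctxt.db_connection = cell_mapping.database_connection
        return cctxt

    def api_get_instance(ctxt, instance_uuid):
        mapping = objects.InstanceMapping.get_by_instance_uuid(ctxt, instance_uuid)
        cctxt = target_cell(ctxt, mapping.cell_mapping)
        return objects.Instance.get_by_uuid(cctxt, instance_uuid)
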
17:15:21 <alaski> we should also start looking at what else can be migrated out of the cell db into the api db
17:15:51 <alaski> doffm: that's a good research area if you're interested
17:16:06 <doffm> alaski: Thanks, I will take a look.
17:16:13 <belmoreira> I have a question related to cellsV1, since we will still use it for a while
17:16:14 <melwitt> speaking of that, I think there's at least a couple places where there are FK constraints between tables that will be split up and we'll need to deal with that
17:16:33 <bauzas> yeah
17:16:38 <doffm> melwitt: Do you know off the top of your head what they are?
17:16:57 <bauzas> we should probably review the cellsv2 etherpad about that
17:17:11 <alaski> melwitt: that's annoying, but expected I suppose
17:17:23 * bauzas remembers how it was difficult to cut the FK for compute_nodes and services
17:17:29 <melwitt> doffm: one is security groups I think
17:17:37 <alaski> belmoreira: one sec
17:18:27 <doffm> Ok, well let's go look them up and discuss on the etherpad? Next week's meeting?
17:18:44 <melwitt> doffm: then I think I found something with fixed_ips
17:19:36 <alaski> doffm: that would be a great topic for the meeting
17:20:02 <doffm> alaski: Sounds good.
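
[To illustrate the foreign key problem melwitt raised: today the association tables point at both sides of the relationship within one database. Simplified from nova's SQLAlchemy models, not the exact schema; once instances and security_groups end up in different databases after the split, an FK like the one below cannot exist and the link has to be enforced in application code.]

    from sqlalchemy import Column, ForeignKey, Integer, String
    from sqlalchemy.ext.declarative import declarative_base

    Base = declarative_base()

    class SecurityGroupInstanceAssociation(Base):
        __tablename__ = 'security_group_instance_association'
        id = Column(Integer, primary_key=True)
        security_group_id = Column(Integer, ForeignKey('security_groups.id'))
        # Only valid while both referenced tables share one database;
        # after the api/cell DB split this FK would span databases.
        instance_uuid = Column(String(36), ForeignKey('instances.uuid'))
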
17:20:25 <bauzas> I remember some discussion about that
17:20:48 <bauzas> mostly all the scheduler related tables like aggregates, services and so on
17:21:15 <bauzas> if we decide to have one scheduler for all the cells
17:21:25 <alaski> yeah, we should start to detangle all of that
17:21:46 <alaski> for now we'll have one scheduler, but it's still open as to whether or not we'll stop there
17:22:14 <bauzas> well
17:22:16 <doffm> One highly available, horizontally scaling scheduler. ;)
17:22:41 <alaski> easy
17:22:46 <bauzas> anyway
17:23:31 <alaski> I think where we last landed was that it'll be possible for deployers to have a scheduler per cell, but it'll be hidden behind a global call to the scheduler api
17:23:50 <alaski> as I know Rackspace and CERN feel confident that they need that
17:24:10 <doffm> I can add IBM to that list also. (Or something like it)
17:24:11 <bauzas> we need to think about that
17:24:38 <belmoreira> alaski: yes, I have some reservations about the all in one scheduler
17:24:50 <bauzas> I just want to make sure we won't have two different schedulers like cells v1
17:25:04 <bauzas> because it doesn't work
17:25:04 <alaski> bauzas: agreed
17:25:18 <alaski> well, it works
17:25:27 <alaski> but it's unnecessary I think
17:25:27 <bauzas> it works, I agree
17:25:36 <alaski> you could accomplish the same thing with a single call
17:25:44 <bauzas> but it means that we have some different services for two identical calls
17:25:50 <alaski> right
17:26:02 <bauzas> I'm not that attached having one single call
17:26:27 <bauzas> 2 calls for 2 scheduler instances is perhaps okay, but that needs to be a deployer's decision using the same service
17:26:31 <bauzas> that's just my point
17:26:36 <alaski> some people have expressed strong opinions about having a single call
17:26:50 <bauzas> which is fine too
17:27:03 <alaski> so I would like to explore that route first, and if there's a good reason for two calls we can go down that route
17:27:17 <bauzas> 100% agreed
17:27:20 <doffm> This is probably a topic for the design summit right? Like a big topic.
17:27:37 <bauzas> when saying I'm not that attached, I mean I would honestly prefer one single call
17:27:41 <alaski> doffm: that's a bit of an understatement :)
17:28:03 <bauzas> doffm: well, not sure we could conclude in a 40-min session :)
17:28:05 <melwitt> heh
17:28:32 <alaski> but yes, it's something we need to discuss there
17:28:41 <bauzas> I'd prefer to see some consensus here before the summit, so we could then go back to the guys
17:28:47 <alaski> my thinking has been that it may be early for it though
17:28:56 <bauzas> it could
17:29:04 <bauzas> it will actually depend on the table split
17:29:16 <bauzas> that's my guess
17:29:17 <alaski> I was hoping to stick with a single scheduler for now, and then discuss it further when we're working on multiple cell support
17:29:21 <bauzas> ++++
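
[The shape being agreed on above, sketched as code: callers always make one global select_destinations()-style call, and whether that is served by a single scheduler or fans out to per-cell schedulers stays a deployment detail hidden behind it. All names here are hypothetical, including the .weight attribute on candidates.]

    def select_destinations(ctxt, request_spec, cell_schedulers):
        # One entry point regardless of deployment shape.
        if len(cell_schedulers) == 1:
            # Single global scheduler: the plan for now.
            return cell_schedulers[0].select_destinations(ctxt, request_spec)
        # Per-cell schedulers, hidden behind the same call: gather
        # candidates from each cell, then weigh them globally.
        candidates = []
        for scheduler in cell_schedulers:
            candidates.extend(scheduler.select_destinations(ctxt, request_spec))
        # Hypothetical global weighing step over the per-cell candidates.
        return sorted(candidates, key=lambda c: c.weight, reverse=True)
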
17:29:29 <belmoreira> in the past we started collecting feedback in this etherpad: https://etherpad.openstack.org/p/nova-cells-scheduling-requirements
17:29:35 <doffm> bauzas: Lets dig in to the table split first then.
17:29:41 <bauzas> yup
17:30:06 <alaski> belmoreira: excellent
17:30:10 <alaski> #link https://etherpad.openstack.org/p/nova-cells-scheduling-requirements
17:30:27 <alaski> heh, I apparently contributed to that
17:30:45 <alaski> belmoreira: you had a question earlier?
17:31:02 <belmoreira> yes, it's related to cellsV1
17:31:21 <belmoreira> we have some bugs with the tag "cells" but very few are in progress...
17:31:54 <belmoreira> since we will still be using cellsV1, should we start mentioning them during this meeting and see what should be fixed until cellsV2 can be used?
17:33:29 <belmoreira> my feeling is that if it has the "cells" tag it is waiting for alaski or melwitt (for example) to have a look
17:33:47 <alaski> I'm going to say yes, with the warning that it's been decided not to spend time on cells v1 except for some exceptional things
17:34:09 <alaski> so we'll need to bring up these changes in the Nova meeting
17:34:26 <melwitt> I do pay attention to new cells bugs that come through the queue. I thought last I checked all the recent ones were assigned to someone, so I need to go look at them again
17:34:42 <alaski> I have been remiss there
17:36:12 <alaski> belmoreira: it would be good to raise awareness of bugs that are affecting you.  we just have to be careful of spending too much time on v1 versus v2
17:36:14 <belmoreira> fair enough... they should be at least triaged and then we can prioritise
17:36:27 <alaski> agreed
17:37:28 <belmoreira> alaski: ok, thanks
17:37:33 <alaski> I'll make a note to go through the bug queue as well
17:37:50 <alaski> anything else for today?
17:37:55 <bauzas> belmoreira: I recently saw some blogpost about your Kilo migration
17:38:00 <bauzas> are those bugs related to it ?
17:38:28 <bauzas> (because for nova, it was mostly said that was because of $cells)
17:38:53 <bauzas> (speaking of http://openstack-in-production.blogspot.fr/2015/11/our-cloud-in-kilo.html )
17:39:15 <belmoreira> bauzas: yes
17:39:19 <bauzas> ack
17:39:53 <bauzas> that would certainly require some discussion before moving further, but okay I see which ones you're talking about :)
17:40:46 <alaski> yeah.  it's good to have a list though
17:40:57 <alaski> anything else?
17:41:06 <belmoreira> in the blog post I think there is only one related to cells
17:41:30 <belmoreira> But we have more... :)
17:41:37 <bauzas> :)
17:42:33 <alaski> looks like that's it today
17:42:47 <alaski> belmoreira: definitely get them reported and we'll work out what to do with them
17:42:54 <alaski> thanks all!
17:42:56 <doffm> Thanks.
17:43:00 <alaski> #endmeeting