17:00:11 #startmeeting nova_cells
17:00:12 Meeting started Wed Feb 3 17:00:11 2016 UTC and is due to finish in 60 minutes. The chair is alaski. Information about MeetBot at http://wiki.debian.org/MeetBot.
17:00:13 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
17:00:16 The meeting name has been set to 'nova_cells'
17:00:26 o/
17:00:30 o/
17:00:48 o/
17:00:49 o/
17:01:04 cool, good to see everyone
17:01:09 #topic v1 testing
17:01:38 looking at http://goo.gl/32yUGy the cells failures seem to mostly track normal failure rates
17:01:50 melwitt: anything you're aware of?
17:02:09 oj <- head with headset on indicating multitasking on a call
17:02:10 alaski: no
17:02:26 dansmith: heh, nice
17:02:33 melwitt: great
17:02:54 it's been really nice that things have been stable for a while
17:03:05 #topic Open Reviews
17:03:19 as always https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking is the place to go for reviews
17:03:38 it looks like some new patches have been added which is great
17:04:12 and now that we're past non-priority FF, getting reviews on cells patches and moving them forward is going to be important
17:04:25 we have about a month left in M I think
17:05:21 yes, end of February
17:05:37 I'll try to go through all of the open reviews this week, if anyone can help review please do so
17:05:40 mlavalle: thanks
17:05:52 #topic Open Discussion
17:06:07 rlrossit can't be here right now. (Has a meeting)
17:06:27 But unless there is objection he is going to start looking at message broker switching issues.
17:06:32 http://docs.openstack.org/releases/mitaka/schedule.html
17:06:34 In preparation for a N spec.
17:06:54 *AN N spec.
17:06:59 oh, I was about to say I've been working on a WIP for that at https://review.openstack.org/#/c/274955/
17:07:07 melwitt: :)
17:07:14 I will point rlrossit to it.
17:08:05 that's great
17:09:10 I have about 2 more patches to put in this week. 1 for devstack cell0 and another working on alaski's WIP for cell0 error state handling.
17:10:12 doffm: how do you want to handle taking over my patch? I can abandon and you can own it, or you can just steal the change-id if you'd like
17:10:29 alaski: I'll just take the change ID.
17:11:01 okay
17:12:02 speaking of N specs it's a good time to start thinking of those, as well as summit discussions
17:12:44 Yeah, I guess melwitt and/or rlrossit will do the message broker spec.
17:13:07 I'd be grateful for ideas for other topics to work on a spec for.
17:13:37 * bauzas waves super late
17:13:45 I haven't thought that far ahead yet
17:13:51 Ok. :)
17:14:13 but assuming everything planned in M gets done there may be some work to add instance_mapping lookups to all api calls
17:14:56 that'll be a requirement for looking at multiple cells
17:15:21 we should also start looking at what else can be migrated out of the cell db into the api db
17:15:51 doffm: that's a good research area if you're interested
17:16:06 alaski: Thanks, I will take a look.
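For context on the instance_mapping item above: the cells v2 idea is that API-level code first consults the instance_mappings table in the API database to find which cell an instance lives in, and only then reads from that cell's database. Below is a minimal sketch of that lookup pattern; the helper names (target_cell in particular) are assumptions about interfaces that were still being designed at the time of this meeting, not settled Nova APIs.

    # Illustrative sketch only: helper names and signatures are assumptions,
    # not final Nova interfaces.
    from nova import context as nova_context
    from nova import objects


    def api_get_instance(ctxt, instance_uuid):
        # 1. API DB lookup: which cell does this instance live in?
        mapping = objects.InstanceMapping.get_by_instance_uuid(
            ctxt, instance_uuid)

        # 2. Target that cell's database for the actual instance read.
        with nova_context.target_cell(ctxt, mapping.cell_mapping) as cell_ctxt:
            return objects.Instance.get_by_uuid(cell_ctxt, instance_uuid)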
17:16:13 I have a question related to cellsV1, since we will still use it for a while
17:16:14 speaking of that, I think there's at least a couple of places where there are FK constraints between tables that will be split up, and we'll need to deal with that
17:16:33 yeah
17:16:38 melwitt: Do you know off the top of your head what they are?
17:16:57 we should probably review the cellsv2 etherpad about that
17:17:11 melwitt: that's annoying, but expected I supposed
17:17:15 *suppose
17:17:23 * bauzas remembers how it was difficult to cut the FK for compute_nodes and services
17:17:29 doffm: one is security groups I think
17:17:37 belmoreira: one sec
17:18:27 Ok, well let's go look them up and discuss on the etherpad? Next week's meeting?
17:18:44 doffm: then I think I found something with fixed_ips
17:19:36 doffm: that would be a great topic for the meeting
17:20:02 alaski: Sounds good.
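On the FK constraints mentioned above (security groups, fixed_ips, and the earlier compute_nodes/services case): tables that end up on opposite sides of the API-database/cell-database split can no longer reference each other, so the constraints have to be dropped in a migration first. A rough sketch of the shape such a sqlalchemy-migrate migration tends to take follows; the fixed_ips/instances pair is only an example for illustration, not a decision from the meeting.

    # Illustrative sketch only: real migrations would need to cover each
    # affected table and handle backend-specific constraint naming.
    from migrate import ForeignKeyConstraint
    from sqlalchemy import MetaData, Table
    from sqlalchemy.engine import reflection


    def upgrade(migrate_engine):
        meta = MetaData(bind=migrate_engine)
        fixed_ips = Table('fixed_ips', meta, autoload=True)
        instances = Table('instances', meta, autoload=True)

        # Look up the existing constraint name, which differs per backend.
        inspector = reflection.Inspector.from_engine(migrate_engine)
        for fk in inspector.get_foreign_keys('fixed_ips'):
            if fk['referred_table'] == 'instances':
                ForeignKeyConstraint(columns=[fixed_ips.c.instance_uuid],
                                     refcolumns=[instances.c.uuid],
                                     name=fk['name']).drop()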
17:20:25 I remember some discussion about that
17:20:48 mostly all the scheduler-related tables like aggregates, services and so on
17:21:15 if we decide to have one scheduler for all the cells
17:21:25 yeah, we should start to detangle all of that
17:21:46 for now we'll have one scheduler, but it's still open as to whether or not we'll stop there
17:22:14 well
17:22:16 One highly available, horizontally scaling scheduler. ;)
17:22:41 easy
17:22:46 anyway
17:23:31 I think where we last landed was that it'll be possible for deployers to have a scheduler per cell, but it'll be hidden behind a global call to the scheduler api
17:23:50 as I know rackspace and CERN feel confident that they need that
17:24:10 I can add IBM to that list also. (Or something like it)
17:24:11 we need to think about that
17:24:38 alaski: yes, I have some reservations about the all-in-one scheduler
17:24:50 I just want to make sure we won't have two different schedulers like cells v1
17:25:04 because it doesn't work
17:25:04 bauzas: agreed
17:25:18 well, it works
17:25:27 but it's unnecessary I think
17:25:27 it works, I agree
17:25:36 you could accomplish the same thing with a single call
17:25:44 but it means that we have some different services for two identical calls
17:25:50 right
17:26:02 I'm not that attached to having one single call
17:26:27 2 calls for 2 scheduler instances is perhaps okay, but that needs to be a deployer's decision using the same service
17:26:31 that's just my point
17:26:36 some people have expressed strong opinions about having a single call
17:26:50 which is fine too
17:27:03 so I would like to explore that route first, and if there's a good reason for two calls we can go down that route
17:27:17 100% agreed
17:27:20 This is probably a topic for the design summit right? Like a big topic.
17:27:37 when saying I'm not that attached, I mean I would honestly prefer one single call
17:27:41 doffm: that's a bit of an understatement :)
17:28:03 doffm: well, not sure we could conclude in a 40min session :)
17:28:05 heh
17:28:32 but yes, it's something we need to discuss there
17:28:41 I'd prefer to see some consensus here before the summit, so we could then go back to the guys
17:28:47 my thinking has been that it may be early for it though
17:28:56 it could
17:29:04 it will actually depend on the table split
17:29:16 that's my guess
17:29:17 I was hoping to stick with a single scheduler for now, and then discuss it further when we're working on multiple cell support
17:29:21 ++++
17:29:29 in the past we started collecting feedback in this etherpad: https://etherpad.openstack.org/p/nova-cells-scheduling-requirements
17:29:35 bauzas: Let's dig into the table split first then.
17:29:41 yup
17:30:06 belmoreira: excellent
17:30:10 #link https://etherpad.openstack.org/p/nova-cells-scheduling-requirements
17:30:27 heh, I apparently contributed to that
17:30:45 belmoreira: you had a question earlier?
17:31:02 yes, it's related to cellsV1
17:31:21 we have some bugs with the tag "cells" but very few are in progress...
17:31:54 since we will still be using cellsV1, should we start mentioning them during this meeting and see what should be fixed until cellsV2 can be used?
17:33:29 my feeling is that if it has the "cells" tag it is waiting for alaski or melwitt (for example) to have a look
17:33:47 I'm going to say yes, with the warning that it's been decided not to spend time on cells v1 except for some exceptional things
17:34:09 so we'll need to bring up these changes in the Nova meeting
17:34:26 I do pay attention to new cells bugs that come through the queue, I thought last I checked all the recent ones were assigned to someone so I need to go look at them again
17:34:42 I have been remiss there
17:36:12 belmoreira: it would be good to raise awareness of bugs that are affecting you. we just have to be careful of spending too much time on v1 versus v2
17:36:14 fair enough... they should be at least triaged and then we can prioritise
17:36:27 agreed
17:37:28 alaski: ok, thanks
17:37:33 I'll make a note to go through the bug queue as well
17:37:50 anything else for today?
17:37:55 belmoreira: I recently saw some blog post about your Kilo migration
17:38:00 are those bugs related to it?
17:38:28 (because for nova, it was mostly said that was because of $cells)
17:38:53 (speaking of http://openstack-in-production.blogspot.fr/2015/11/our-cloud-in-kilo.html )
17:39:15 bauzas: yes
17:39:19 ack
17:39:53 that would certainly require some discussion before moving further, but okay I see which ones you're talking about :)
17:40:46 yeah. it's good to have a list though
17:40:57 anything else?
17:41:06 in the blog post I think there is only one related to cells
17:41:30 But we have more... :)
17:41:37 :)
17:42:33 looks like that's it for today
17:42:47 belmoreira: definitely get them reported and we'll work out what to do with them
17:42:54 thanks all!
17:42:56 Thanks.
17:43:00 #endmeeting