15:00:28 #startmeeting gantt
15:00:29 Meeting started Tue Feb 3 15:00:28 2015 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:30 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:32 The meeting name has been set to 'gantt'
15:00:40 anyone here to talk about the scheduler?
15:00:43 o/
15:00:51 o/
15:01:12 \o
15:02:05 OK, let'
15:02:14 OK, let's start
15:02:22 * n0ano ' and return are too close
15:02:31 o/
15:02:36 n0ano: or your fingers are too fat
15:02:37 * bauzas like a and tab for a french kb
15:03:03 edleafe, I never thought of that :-)
15:03:11 #topic remove DB access spec
15:03:39 unfortunately, not a lot of follow-up after the midcycle
15:03:41 edleafe, I see activity on this, I think most of the outstanding comments are kind of implementation details, how do you feel about it
15:03:53 edleafe: I left a review, sorry for the short notice
15:04:22 edleafe: I had no big issues but I think you need to address 2 points
15:04:27 I'm wondering if this is going to be possible at all
15:04:54 I just comment on some more small thing, I didn't have any big concern
15:04:54 If the spec isn't approved yet, I don't see how we're going to get all the changes that would be needed in by the FF
15:05:22 edleafe: agreed
15:05:29 edleafe, I haven't given up yet and we can provide help on the changes
15:05:30 edleafe: that doesn't mean you can't land a patch
15:05:39 alex_xu_: yes, I responded to your comments
15:05:46 edleafe: because most of the stuff has been agreed
15:06:30 what bauzas said
15:06:47 bauzas: I can probably land a patch or two, but I see several patches needed to get these changes in
15:06:57 edleafe: my comments are implementation details that would have come up in a review y'know :)
15:07:08 Just changing the compute node stuff to be versioned will be fun
15:07:17 edleafe: I don't think so
15:07:23 bauzas, hence my comment on implementation details
15:07:24 edleafe: I mean for adding a new col
15:07:47 n0ano: so we're in violent agreement eh ?
15:07:54 bauzas, yet again
15:07:54 bauzas: you don't think that there will be discussion over how to best do this?
15:08:07 it is a very new thing to be adding versioning to the database
15:08:26 edleafe: keep it simple*
15:08:35 edleafe: just add a col and that's it
15:08:54 edleafe: that's just a migration script of 4 lines to write
15:09:09 2 for upgrading and 2 for downgrading
15:09:16 bauzas: I know how to do it; it's answering the questions that will come up over *why*
15:09:27 adding the column is easy as long as the usage of that column is simple
15:09:38 edleafe: eh, that's in the spec
15:09:55 edleafe, I think we thrashed that out at the meetup so it shouldn't come as a surprise to anyone
15:10:08 edleafe: so prepare your patch, do your series and once we're good to go with the spec, we can fire your patches
15:10:09 n0ano: I hope your
15:10:14 ugh
15:10:18 I hope you're right
15:10:25 * n0ano refuses to comment on fat fingers
15:10:41 n0ano: Oh, I know just how fat my fingers are
15:11:07 * bauzas sizing his fingersz
15:11:42 I think our way forward is fairly simple...
15:11:50 agreed
15:11:55 1. edleafe to address the minor issues with the spec
15:12:04 2. work on the patches
15:12:17 3. grab coffee and fix CI
15:12:26 3. update the DB with the new version column (address any concerns that come up)
15:12:30 ...profit!!
15:12:32 4. declare success
15:13:06 5. iterate over the previous 4
15:13:14 bauzas, +1
15:13:29 I think we're all in agreement here, let's move on
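As a rough illustration of the "migration script of 4 lines" mentioned at 15:08:54, a minimal sketch following the upgrade/downgrade convention of Nova's sqlalchemy-migrate scripts could look like the code below. The column name "version" and the raw-SQL approach are assumptions for illustration, not taken from the actual patch series.

    def upgrade(migrate_engine):
        # Add the new versioning column to compute_nodes.
        migrate_engine.execute(
            "ALTER TABLE compute_nodes ADD COLUMN version INTEGER")


    def downgrade(migrate_engine):
        # Remove the column again when rolling back.
        migrate_engine.execute(
            "ALTER TABLE compute_nodes DROP COLUMN version")

As the discussion above notes, the schema change itself is the easy part; the open questions are about how the new column gets used.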
15:13:43 #topic detach service from computenode
15:13:52 bauzas, this is yours
15:13:54 so I was greedy
15:14:04 I already discussed that with jaypipes
15:14:20 but let me explain again here
15:14:23 heh, I already know the story
15:14:33 me too
15:14:42 so, virt drivers report a list of nodes
15:14:58 alex_xu_, edleafe keep bauzas honest
15:15:12 Nova is taking this list of nodes per service and provides a ComputeNode resource for each
15:15:29 n0ano: yes, sir
15:15:45 the problem is when the virt driver is reporting something non-local, like a cluster
15:16:15 then you could have duplicate records for the same resources if 2 or more drivers would report the same list
15:16:44 so jaypipes gave his feeling that it's not supported, period.
15:16:54 and asked me to do my homework
15:17:00 bauzas: lol
15:17:29 what's not supported, a cluster reporting duplicate nodes or reporting a cluster at all?
15:17:33 jaypipes: technically, I'm not at home now :)
15:17:36 * alex_xu_ is thinking about english question, period means 1 release cycle, or 2...or 3?
15:17:50 alex_xu_: period means "that's it"
15:17:56 alex_xu_, he means it's definitely nt supported
15:18:01 s/nt/not
15:18:05 alex_xu_: with no plans to add support
15:18:25 n0ano: it's not supported to see more than 1 compute node reporting the same set of resources
15:18:26 oops... n0ano, lxsli, thanks
15:19:03 bauzas, your mean 2 compute nodes reporting the same set of resources, right?
15:19:10 n0ano: indeed
15:19:36 which is fine but is there a way to guarantee this won't happen?
15:20:18 n0ano: that's my point, I just think this thought needs to be further appreciated with regards to what will be a resource in the next future
15:20:50 n0ano: saying that a unique identifier for a compute node is its hypervisor_hostname makes it more understandable IMHO
15:21:17 then, to summarize, there's no issue now and we need to make sure we don't create this issue in the future.
15:21:37 about nova-compute ha, I remeber it is discussed at juno summit https://etherpad.openstack.org/p/juno-nova-clustered-hypervisor-support but don't know what result
15:21:56 jaypipes: can I ask why you didn't want to support that?
15:22:20 n0ano: well, there is a workaround for Ironic
15:22:40 n0ano: so yes that's not technically an issue, but that just conceptually sucks
15:23:32 on one hand, we want to bail out the relationship with services, but on the other hand, we keep this as a tuple element for an unique key
15:23:45 alex_xu_: because a) it's a distraction to our current work and b) it changes the model of how Nova failure zones are structured and how Nova considers the ownership of resources.
15:24:39 jaypipes, a) is most important to me, especially as this doesn't seem lose any significant capability
15:24:39 a) is not a problem with an opensource model
15:25:05 bauzas, a) is an issue for development, no matter what the model
15:25:08 jaypipes: thanks for the answer
15:25:38 failure zones means what?
15:25:41 n0ano: the model supposes a constrained number of resources
15:26:28 n0ano: I can't accept we shouldn't at least identify how to fix it because we're considering ourselves as distracted
15:26:33 bauzas, old saying - 9 women can't create a baby in 1 month, unlimited resources doesn't necessarily help
15:27:30 bauzas, I have no problem looking at the issue and thinking of solutions as long as we don't block current work based on that
15:28:07 n0ano: I think I have other things that are more blocked than this one...
15:28:17 n0ano: like the requestspec obj BP...
15:28:29 bauzas, which is why I would have qualified jaypipes `period' with `period for now'
15:29:34 ok, then let's move on
15:29:35 alex_xu_: failure zones == the acceptable surface of failure. in nova-compute's case, it means that a failure of one nova-compute daemon will affect only the ability to change resources on just the local node the nova-compute worker is running on.
15:29:50 everybody is aware of the limitation now, and we consider it as non-blocker
15:30:06 alex_xu_: and since Ironic and the clustered hypervisor managers (HyperV and VMWare) changed the notion of a failure zone from local node to local cluster, that was A Bad Thing.
15:30:14 bauzas, would you quit typing my thoughts faster than me :-)
15:30:39 jaypipes: not hyperv :)
15:30:44 n0ano: I wish I could
15:30:51 jaypipes: VMWare ;)
15:30:58 alexpilotti: hyperv is not a clustered hypervisor?
15:31:05 jaypipes: hell no
15:31:33 jaypipes: thanks again :)
15:31:51 anyway, let's move on
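To make the duplicate-records concern more concrete, the following is a minimal sketch (not Nova's actual schema) of the difference between keying compute nodes per service and keying them on hypervisor_hostname alone, which is the unique-identifier point raised at 15:20:50. The table and constraint names here are hypothetical.

    from sqlalchemy import (Column, Integer, MetaData, String, Table,
                            UniqueConstraint)

    meta = MetaData()

    compute_nodes_sketch = Table(
        'compute_nodes_sketch', meta,
        Column('id', Integer, primary_key=True),
        Column('host', String(255)),                 # service reporting the node
        Column('hypervisor_hostname', String(255)),  # node name from the virt driver
        # Keyed per (service, node): two services reporting the same cluster
        # node each get their own row, i.e. duplicate records for one set of
        # resources.
        UniqueConstraint('host', 'hypervisor_hostname', name='uniq_per_service'),
        # Keyed on the node alone (hypothetical alternative): a second service
        # reporting the same node would violate the constraint instead of
        # duplicating it.
        # UniqueConstraint('hypervisor_hostname', name='uniq_per_node'),
    )

Either way, the meeting above treats the duplicate case as unsupported rather than prevented, and considers it a non-blocker.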
15:31:58 #topic status on cleanup work
15:32:26 n0ano: so I have a blocker thing on ReqSpec BP
15:32:27 basically, is there any issues with current patches that we want to talk about
15:32:34 bauzas, go for i
15:32:37 s/i/ist
15:32:42 s/ist/it
15:32:47 n0ano: it was raised during midcycle but I heard no clear outcome
15:33:07 n0ano: so basically my whole series got -1 because of the Instance obj being use
15:33:09 used
15:33:17 n0ano: that, I can understand
15:33:27 n0ano: but I still need to provide an Image object
15:33:49 and the problem is about the properties field of that object, which is very versatile
15:34:33 n0ano: https://review.openstack.org/#/c/146913/
15:35:09 https://review.openstack.org/#/c/76234 is being requested to merge instead
15:35:29 everybody agrees on that ?
15:36:02 my proposal was to write a first bump of the Image object with the unversioned properties field, and bump it to 1.1 with the above patch
15:37:00 I would prefer to get 76234 in before.
15:37:15 jaypipes: just saw the patch series, ok, let's wait for it
15:37:49 bauzas: jaypipes: agreed
15:37:52 bauzas, not having looked at 76234, does it negate your patch or do you just need to use it after it lands
15:38:02 n0ano: I could use it
15:38:42 looks like there's activity on it so, if we all in agreement, let's just wait for it to land and then proceed
15:38:48 n0ano: my only fear is that this patch couldn't be merged before FF
15:39:06 n0ano: in that case, it would postpone my series up to L
15:39:17 n0ano: as it's not a priority patch
15:39:41 bauzas, since the scheduler work is priority and will depend upon this patch we can make a plea in the nova meeting to prioritize this patch
15:40:26 n0ano: makes sense
15:40:31 n0ano: ok, let's move on then
15:40:40 sounds good
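For readers unfamiliar with the versioned-object plan sketched between 15:33:27 and 15:36:02, a first bump of an Image object with an unversioned, free-form properties field might look roughly like the code below. It uses the oslo.versionedobjects style purely for illustration; the class and field names are assumptions, not the code under review.

    from oslo_versionedobjects import base
    from oslo_versionedobjects import fields


    @base.VersionedObjectRegistry.register
    class ImageMetaSketch(base.VersionedObject):
        # 1.0: 'properties' is a loosely typed dict of strings. A later 1.1
        # bump would swap it for the typed image metadata object proposed in
        # https://review.openstack.org/#/c/76234.
        VERSION = '1.0'

        fields = {
            'id': fields.UUIDField(),
            'name': fields.StringField(nullable=True),
            'properties': fields.DictOfStringsField(),
        }

The outcome above, though, is to wait for 76234 to land rather than go through the 1.0/1.1 bump.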
15:40:44 #topic opens
15:40:51 anyone have anything new for today?
15:40:53 jaypipes: how's that numatopology please?
15:41:14 lxsli: running tests locally now after rebasing and fixing conflicts. should be up within half an hour.
15:41:24 \o/ hooray \o/
15:42:18 I'm hearing crickets, I'm about to close this
15:42:47 cool
15:42:51 OK, tnx everyone, we'll meet here again next week
15:42:54 #endmeeting