15:00:06 #startmeeting scheduler
15:00:07 Meeting started Tue Dec 17 15:00:06 2013 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:08 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:10 The meeting name has been set to 'scheduler'
15:00:19 anyone here for the scheduler meeting?
15:02:59 hi
15:03:08 hello
15:03:40 I was beginning to think I had the wrong time or something :-)
15:05:00 o/
15:05:29 hmm, looks like the mirantis guys aren't here so we can't talk about the no-db scheduler status
15:05:47 in that case
15:05:56 #topic scheduler code forklift
15:06:16 I can definitely report that we are making good progress on separating out the scheduler code...
15:06:36 we have created two new repos, one for the scheduler and one for the client APIs...
15:06:47 you can get to these repos at...
15:07:03 #link https://github.com/openstack/gantt
15:07:30 #link https://github.com/openstack/python-ganttclient
15:08:09 they are not quite ready for prime time yet; I'm trying to do a simple patch (update the README file) and we're having issues with Jenkins...
15:08:33 once we get that resolved the repos should be ready for normal updates & reviews
15:09:22 I still have issues with the client - we have yet to even talk about interfaces
15:09:38 is the client here just an interface for the RPC?
15:10:09 that's all that's there so far, but I believe it should be extended to a RESTful interface
15:10:33 ok, thanks for the clarification
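
(A pure RPC pass-through client, as just described, might look roughly like the sketch below. The class name, method name, and oslo.messaging calls are illustrative assumptions, not the actual python-ganttclient interface, which had not been defined at this point.)

    # Illustrative sketch only: a thin client whose public methods are
    # one-line RPC calls to the scheduler topic. Everything here is an
    # assumption; the real python-ganttclient API did not yet exist.
    from oslo_config import cfg
    import oslo_messaging as messaging


    class SchedulerClient(object):
        """Hypothetical pure-RPC client for the split-out scheduler."""

        def __init__(self, topic='scheduler'):
            transport = messaging.get_rpc_transport(cfg.CONF)
            target = messaging.Target(topic=topic, version='1.0')
            self._client = messaging.RPCClient(transport, target)

        def select_destinations(self, ctxt, request_spec, filter_properties):
            # A blocking RPC round trip; the RESTful interface mentioned
            # above would replace this with an HTTP request to a scheduler
            # API endpoint instead.
            return self._client.call(ctxt, 'select_destinations',
                                     request_spec=request_spec,
                                     filter_properties=filter_properties)
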
15:10:54 can you articulate your issues with the client, are they architectural or implementation specific?
15:11:06 just one concern about the development in 2 different trees - will the imports be the diffs?
15:11:47 my thought was, until we move to the new trees, the primary development is in nova and I can take on the task of mirroring changes to the new trees
15:11:51 n0ano: I am not sure why we need a client defined at the moment. The reason for that is there is no actual API defined.
15:11:55 o/
15:12:22 hopefully, there won't be too many changes before we can move to the new trees
15:12:36 ok. thanks for doing this.
15:13:23 garyk I think that's a reasonable concern, we should discuss the actual API and define it; the real question will be who wants to take on the task of doing that (design by committee is not one of my favorite modes)
15:14:38 i think that the ideal place for that would be the next summit. a few people in a room… only get out for beer when there is an API defined
15:14:51 Do we have enough details to try to define the api yet?
15:14:53 update on the memcached scheduler: devstack was broken last week, so we still can't run end-to-end tests. Hopefully this will be completed this week
15:15:14 alaski: no, not yet
15:15:21 alaski I don't think so, not yet
15:15:34 all we have decided on is the forklift, which is just moving code from a to b
15:15:41 In what way was DevStack broken? How would I know about this? (Sorry for the newbie questions)
15:15:54 I agree with garyk, a session in Atlanta would be appropriate but we still need one person to drive
15:16:02 garyk: thanks, just confirming
15:16:15 MikeSpreitzer: www.devstack.org - this is used for gating and helps one spin up an openstack setup
15:16:25 it is used for testing and development
15:16:42 I know that much. My question is about how I know about it being broken.
15:17:00 KVM crashes on an attempt to start a new VM
15:17:00 #topic memcached base scheduler
15:17:09 Is "devstack is broken" the same as "problem in the gate"?
15:17:20 MikeSpreitzer, +1
15:17:21 MikeSpreitzer: over the last few weeks the gating has been broken every now and then. Last week it was due to some issues with Neutron
15:17:41 MikeSpreitzer: no, DevStack doesn't work locally
15:18:02 it is not a problem in the gate
15:18:32 so the problem was your local development environment was broken - ick
15:18:43 correct
15:19:16 but it sounds like there are gate issues also, just to make life interesting
15:19:58 hnarkaytis, will the holidays slow you down?
15:20:16 we are in Russia, so we will have holidays in Jan
15:20:29 we will work till Dec 31st
15:20:41 cool, you work while we play :-)
15:20:54 I expect that end-to-end tests will be completed this week
15:21:18 hnarkaytis, with patches up for review shortly afterwards?
15:21:30 crashing KVM is the only problem. We spent Friday and Monday on this
15:21:46 sounds violent
15:22:35 I will alert all reviewers via gerrit
15:23:00 hnarkaytis, tnx for the effort, looking forward to the patches, any other questions on this subject?
15:23:49 in that case, garyk did you want to talk about instance groups (I've kind of been ignoring you)
15:25:17 n0ano: sure. just wanted to give a quick update
15:25:29 #topic instance groups
15:25:33 garyk, you have the floor
15:25:48 1. we have a scheduling patch in review https://review.openstack.org/#/c/33956/
15:26:14 2. we are currently working on the V2 and V3 APIs - the code needs a little refactor and hopefully we'll post it sooner rather than later
15:26:36 3. we have the client V2 support, need to add the V3 support
15:27:00 that's about it at the moment. We are progressing slowly but hope to have it all done by I2
15:27:07 (great, first set of patches I need to mirror to the new tree :-)
15:27:08 if not we should be tarred and feathered :)
15:27:57 regarding the new tree - maybe we should have a script that notifies us if one of the files in the scheduling directory or the rpc interfaces is updated…
15:28:45 garyk, that'd be great, otherwise I was just planning on monitoring things carefully; fortunately it's a relatively known set of files to monitor
15:29:19 n0ano: i'll try and script something for that in the coming days
15:29:44 garyk, cool, let me know, we should be able to work it out
15:30:00 ok, will do.
15:30:19 #topic administrivia
15:30:21 i guess that you can go to the next topic - nothing else to update regarding the instance groups
15:30:29 :)
15:31:17 I'm going to be out the next two weeks (major holidays here); unless someone else wants to chair, I propose we just cancel the next two meetings and start up again on 1/7/14
15:31:44 work can still progress (email is a wonderful thing)
15:32:05 +1
15:32:08 +1
15:32:15 +1
15:32:49 I'll take that as unanimous consent :-)
15:32:54 #topic opens
15:32:58 +1
15:33:04 Anyone have anything new for today?
15:33:35 wishing you all a merry xmas and happy new year (on our side of the world it is business as usual till April :()
15:33:57 +1
15:34:59 OK, I'm hearing silence, tnx everyone and we'll meet here again next year
15:35:16 #endmeeting
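
(A minimal sketch of the change-notification script garyk offered to write at 15:29:19, assuming git is used to watch the scheduler files in a nova checkout; the watched path, the one-week window, and the output format are illustrative assumptions, not the script that was actually delivered.)

    # List recent nova commits that touch the scheduler code, so they can
    # be flagged for mirroring into the gantt tree. The path below is an
    # assumption based on the nova layout of this period.
    import subprocess

    WATCHED_PATHS = ['nova/scheduler/']


    def commits_touching_scheduler(repo_dir, since='1 week ago'):
        """Return '<short-hash> <subject>' lines for matching commits."""
        out = subprocess.check_output(
            ['git', 'log', '--since', since, '--format=%h %s', '--'] +
            WATCHED_PATHS,
            cwd=repo_dir)
        return [line for line in out.decode().splitlines() if line]


    if __name__ == '__main__':
        # Run from inside a nova checkout; each line printed is a commit
        # that someone would need to mirror to the gantt repo by hand.
        for commit in commits_touching_scheduler('.'):
            print('needs mirroring to gantt: ' + commit)
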