15:00:06 <n0ano> #startmeeting scheduler
15:00:07 <openstack> Meeting started Tue Dec 17 15:00:06 2013 UTC and is due to finish in 60 minutes. The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:10 <openstack> The meeting name has been set to 'scheduler'
15:00:19 <n0ano> anyone here for the scheduler meeting?
15:02:59 <garyk> hi
15:03:08 <MikeSpreitzer> hello
15:03:40 <n0ano> I was beginning to think I had the wrong time or something :-)
15:05:00 <comstud> o/
15:05:29 <n0ano> hmm, looks like the mirantis guys aren't here so we can't talk about the no-db scheduler status
15:05:47 <n0ano> In that case
15:05:56 <n0ano> #topic scheduler code forklift
15:06:16 <n0ano> I can definitely report that we are making good progress on separating out the scheduler code...
15:06:36 <n0ano> we have created two new repos, one for the scheduler and one for the client APIs...
15:06:47 <n0ano> you can get to these repos at...
15:07:03 <n0ano> #link https://github.com/openstack/gantt
15:07:30 <n0ano> #link https://github.com/openstack/python-ganttclient
15:08:09 <n0ano> They are not quite ready for prime time yet; I'm trying to do a simple patch (update the README file) and we're having issues with Jenkins...
15:08:33 <n0ano> once we get that resolved the repos should be ready for normal updates & reviews
15:09:22 <garyk> i still have issues with the client - we have yet to even talk about interfaces.
15:09:38 <garyk> is the client here just an interface for the rpc?
15:10:09 <n0ano> that's all that's there so far, but I believe it should be extended to a RESTful interface
15:10:33 <garyk> ok, thanks for the clarification
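[Editor's note: since, as garyk points out below, no actual API had been defined at this point, the following is a purely hypothetical sketch of the thin "interface for the rpc" that n0ano describes. The class, method, and topic names are assumptions (select_destinations mirrors nova's scheduler rpcapi of the era) and are not the real python-ganttclient code.]

    # A minimal sketch, assuming an oslo.messaging transport.
    from oslo.config import cfg
    from oslo import messaging

    class SchedulerClient(object):
        """Hypothetical thin pass-through to the scheduler service over RPC."""

        def __init__(self, transport_url, topic='scheduler'):
            transport = messaging.get_transport(cfg.CONF, transport_url)
            target = messaging.Target(topic=topic)
            self._client = messaging.RPCClient(transport, target)

        def select_destinations(self, ctxt, request_spec, filter_properties):
            # Synchronous RPC call asking the scheduler to pick hosts for
            # the request; no REST layer is involved at this stage.
            return self._client.call(ctxt, 'select_destinations',
                                     request_spec=request_spec,
                                     filter_properties=filter_properties)

[A RESTful extension would expose the same operations as HTTP endpoints rather than RPC calls, which is the distinction n0ano draws above.]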
15:10:54 <n0ano> can you articulate your issues with the client, are they architectural or implementation specific?
15:11:06 <garyk> just one concern about development in 2 different trees - will the only diffs be the imports?
15:11:47 <n0ano> my thought was, until we move to the new trees, the primary development is in nova and I can take on the task of mirroring changes to the new trees.
15:11:51 <garyk> n0ano: i am not sure why we need a client defined at the moment. the reason for that is that there is no actual API defined.
15:11:55 <alaski> o/
15:12:22 <n0ano> hopefully, there won't be too many changes before we can move to the new trees
15:12:36 <garyk> ok. thanks for doing this.
15:13:23 <n0ano> garyk I think that's a reasonable concern, we should discuss the actual API and define it; the real question will be who wants to take on the task of doing that (design by committee is not one of my favorite modes)
15:14:38 <garyk> i think that the ideal place for that would be the next summit. a few people in a room… only get out for beer when there is an API defined
15:14:51 <alaski> Do we have enough details to try to define the api yet?
15:14:53 <hnarkaytis> update on the memcached scheduler: devstack was broken last week, so we still can't run end-to-end tests. Hopefully this will be completed this week
15:15:14 <garyk> alaski: no, not yet
15:15:21 <n0ano> alaski I don't think so, not yet
15:15:34 <garyk> all we have decided on is the forklift, which is just moving code from a to b
15:15:41 <MikeSpreitzer> In what way was DevStack broken? How would I know about this? (Sorry for the newbie questions)
15:15:54 <n0ano> I agree with garyk, a session in Atlanta would be appropriate but we still need one person to drive
15:16:02 <alaski> garyk: thanks, just confirming
15:16:15 <garyk> MikeSpreitzer: www.devstack.org - this is used for the gating and helps one spin up a setup with openstack
15:16:25 <garyk> it is used for testing and development
15:16:42 <MikeSpreitzer> I know that much. My question is about how I would know about it being broken.
15:17:00 <hnarkaytis> kvm crashes on an attempt to start a new VM
15:17:00 <n0ano> #topic memcached-based scheduler
15:17:09 <MikeSpreitzer> Is "devstack is broken" the same as "problem in the gate"?
15:17:20 <n0ano> MikeSpreitzer, +1
15:17:21 <garyk> MikeSpreitzer: over the last few weeks the gating has been broken every now and then. Last week it was due to some issues with Neutron
15:17:41 <hnarkaytis> MikeSpreitzer: no, DevStack doesn't work locally
15:18:02 <hnarkaytis> it is not a problem on the gate
15:18:32 <n0ano> so the problem was your local development environment was broken - ick
15:18:43 <hnarkaytis> correct
15:19:16 <n0ano> but it sounds like there are gate issues also, just to make life interesting
15:19:58 <n0ano> hnarkaytis, will the holidays slow you down?
15:20:16 <hnarkaytis> we are in Russia, so we will have holidays in Jan
15:20:29 <hnarkaytis> we will work till Dec 31st
15:20:41 <n0ano> cool, you work while we play :-)
15:20:54 <hnarkaytis> I expect that the end-to-end tests will be completed this week
15:21:18 <n0ano> hnarkaytis, with patches up for review shortly afterwards?
15:21:30 <hnarkaytis> crashing KVM is the only problem. We spent Friday and Monday on this
15:21:46 <MikeSpreitzer> sounds violent
15:22:35 <hnarkaytis> I will alert all reviewers via gerrit
15:23:00 <n0ano> hnarkaytis, tnx for the effort, looking forward to the patches, any other questions on this subject?
15:23:49 <n0ano> in that case, garyk did you want to talk about instance groups (I've kind of been ignoring you)
15:25:17 <garyk> n0ano: sure. just wanted to give a quick update
15:25:29 <n0ano> #topic instance groups
15:25:33 <n0ano> garyk, you have the floor
15:25:48 <garyk> 1. we have a scheduling patch in review https://review.openstack.org/#/c/33956/
15:26:14 <garyk> 2. we are currently working on the V2 and V3 APIs - the code needs a little refactoring and hopefully we'll post it sooner rather than later
15:26:36 <garyk> 3. we have the client V2 support, need to add the V3 support
15:27:00 <garyk> that's about it. We are progressing slowly at the moment but hope to have it all done by I2
15:27:07 <n0ano> (great, first set of patches I need to mirror to the new tree :-)
15:27:08 <garyk> if not we should be tarred and feathered :)
15:27:57 <garyk> regarding the new tree - maybe we should have a script that notifies us if one of the files in the scheduling directory or the rpc interfaces is updated…
15:28:45 <n0ano> garyk, that'd be great, otherwise I was just planning on monitoring things carefully; fortunately it's a relatively known set of files to monitor
15:29:19 <garyk> n0ano: i'll try and script something for that in the coming days
15:29:44 <n0ano> garyk, cool, let me know, we should be able to work it out
15:30:00 <garyk> ok, will do.
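[Editor's note: a minimal sketch of the notification script garyk volunteers to write above. The watched path and the way the last-seen commit is tracked are illustrative assumptions; the idea is simply to ask git which new nova commits touch the scheduler files so they can be mirrored into the gantt tree.]

    #!/usr/bin/env python
    # Hypothetical sketch: list new commits in a nova checkout that touch
    # the scheduler code or its rpc interface, i.e. the changes that would
    # need mirroring into the gantt tree.
    import subprocess

    # Assumed watch list ("a relatively known set of files to monitor");
    # this directory also contains the scheduler rpcapi module.
    WATCHED_PATHS = ['nova/scheduler/']

    def commits_to_mirror(repo_dir, last_seen_sha):
        """Return one-line summaries of commits after last_seen_sha
        that touch any watched path."""
        out = subprocess.check_output(
            ['git', 'log', '--oneline', '%s..HEAD' % last_seen_sha, '--'] +
            WATCHED_PATHS, cwd=repo_dir)
        return out.decode().splitlines()

    if __name__ == '__main__':
        # In practice last_seen_sha would be persisted between runs;
        # HEAD~50 is just a placeholder starting point.
        for line in commits_to_mirror('/path/to/nova', 'HEAD~50'):
            print('needs mirroring to gantt: %s' % line)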
15:30:19 <n0ano> #topic administrivia
15:30:21 <garyk> i guess that you can go to the next topic - nothing else to update regarding the instance groups
15:30:29 <garyk> :)
15:31:17 <n0ano> I'm going to be out the next two weeks (major holidays here); unless someone else wants to chair, I propose we just cancel the next two meetings and start up again on 1/7/14
15:31:44 <n0ano> work can still progress (email is a wonderful thing)
15:32:05 <alaski> +1
15:32:08 <jgallard> +1
15:32:15 <toan-tran> +1
15:32:49 <n0ano> I'll take that as unanimous consent :-)
15:32:54 <n0ano> #topic opens
15:32:58 <garyk> +1
15:33:04 <n0ano> Anyone have anything new for today?
15:33:35 <garyk> wishing you all a merry xmas and happy new year (on our side of the world it is business as usual till April :()
15:33:57 <n0ano> +1
15:34:59 <n0ano> OK, I'm hearing silence, tnx everyone and we'll meet here again next year
15:35:16 <n0ano> #endmeeting