15:00:52 <n0ano> #startmeeting gantt
15:00:53 <openstack> Meeting started Tue Jan 20 15:00:52 2015 UTC and is due to finish in 60 minutes.  The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:54 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:56 <openstack> The meeting name has been set to 'gantt'
15:01:02 <bauzas> \o
15:01:04 <alex_xu> o/
15:01:05 <edleafe> o/
15:01:08 <lxsli> \o
15:01:11 <n0ano> drift off for 30 seconds and people slap you :-)
15:01:25 <n0ano> sorry for the delay, anyway...
15:01:40 <edleafe> n0ano: we'll let it slide this one time...
15:01:41 <bauzas> eh eh
15:01:46 <edleafe> ;-)
15:01:52 <n0ano> #topic Remove direct nova DB/API access by Scheduler Filters - https://review.openstack.org/138444/
15:02:00 <n0ano> edleafe, but not next time
15:02:27 <n0ano> we talked on email and bauzas tried to explain, are you two still far apart?
15:02:30 <edleafe> ok, so here's the basic issue
15:02:47 <edleafe> there are two ways to get the instance info to the scheduler
15:03:02 <bauzas> edleafe: please explain those 2 things
15:03:13 <edleafe> one is to enhance the current _get_all_host_states() so that each host includes instance info
15:03:28 <bauzas> let's call it option #2
15:03:29 <bauzas> oops
15:03:31 <bauzas> option #1
15:03:35 <edleafe> the other is to have hosts report changes in instance info to scheduler, which would maintain that in-memory
15:03:52 <bauzas> let's call it option #2
15:03:57 <edleafe> bauzas: ok, these are options 1 & 2
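
A minimal sketch of what option #1 amounts to, using the HostState and _get_all_host_states() names from nova's scheduler but with illustrative bodies and a hypothetical `db` accessor: each scheduling request still rebuilds the host list, but every HostState now also carries that host's instances, so the filters read them there instead of hitting the DB themselves.

```python
class HostState(object):
    """Per-host view the filters see; rebuilt on every scheduling request."""

    def __init__(self, host, compute, instances):
        self.host = host
        self.free_ram_mb = compute["free_ram_mb"]
        # Option #1: attach the host's instances so filters read them from
        # HostState instead of reaching into the nova DB themselves.
        self.instances = {inst["uuid"]: inst for inst in instances}


def _get_all_host_states(db):
    """One query for the compute nodes, plus one per host for its instances."""
    states = []
    for compute in db.compute_node_get_all():
        instances = db.instance_get_all_by_host(compute["host"])
        states.append(HostState(compute["host"], compute, instances))
    return states
```
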
15:04:09 <edleafe> option 2 is more disruptive
15:04:20 <edleafe> would require changing code in more places
15:04:31 <bauzas> ok, about #2, you mean that hosts will report all the instances every 60 secs?
15:04:31 <edleafe> but would be closer to the eventual model that we want
15:04:41 <edleafe> this was jaypipes's concern
15:04:49 <edleafe> and his main reason for favoring the approach
15:04:52 <alex_xu> edleafe: option #2, report changes to scheduler, means writing the instance info to the compute node table?
15:04:53 <bauzas> dammit, jaypipes is not yet there !
15:05:15 <edleafe> bauzas: they would report when a change occurs
15:05:23 <edleafe> create/destroy/resize
15:05:24 <bauzas> seriously, Jay, if you're reading us, please come in, because we already discussed this BP last week without you
15:05:34 <bauzas> edleafe: using RPC API call ?
15:05:44 <edleafe> bauzas: yes
15:06:07 <bauzas> edleafe: isn't what I said last week OK?
15:06:26 <edleafe> in opt#2, the scheduler would get the info when it starts up, and the hosts would update when they change
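
A rough sketch of option #2 as edleafe describes it, with entirely hypothetical class and method names (this is not nova's real RPC API): compute hosts cast a one-way update on create/destroy/resize, and the scheduler seeds its view once at start-up, then applies those updates in memory.

```python
class SchedulerNotifier(object):
    """Compute-side helper: pushes instance changes to the scheduler.

    ``rpc`` stands in for an RPC client whose ``cast()`` sends a one-way
    message to the scheduler service (hypothetical interface).
    """

    def __init__(self, rpc):
        self.rpc = rpc

    def notify_instance_change(self, host, change, instance):
        # change is one of 'create', 'destroy', 'resize'
        self.rpc.cast("update_instance_info",
                      host=host, change=change, instance=instance)


class InstanceInfoCache(object):
    """Scheduler-side: seeded once at start-up, then kept current by RPC updates."""

    def __init__(self, db):
        self._by_host = {}
        for compute in db.compute_node_get_all():
            host = compute["host"]
            self._by_host[host] = {
                inst["uuid"]: inst
                for inst in db.instance_get_all_by_host(host)
            }

    def update_instance_info(self, host, change, instance):
        instances = self._by_host.setdefault(host, {})
        if change == "destroy":
            instances.pop(instance["uuid"], None)
        else:  # 'create' and 'resize' both just replace the cached record
            instances[instance["uuid"]] = instance
```
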
15:06:38 <bauzas> option #2 is for having pending instances provided to the scheduler
15:07:11 <bauzas> edleafe: right, but do you understand that there is no persistence in the scheduler now ?
15:07:26 <n0ano> edleafe, my only concern with #2 is how to get initial state, does the scheduler have to query every host in the system?
15:07:34 * bauzas would like to be fluent in English...
15:07:49 <n0ano> n0ano, would like to be fluent in French :-)
15:07:55 <edleafe> bauzas: well, isn't that what would happen? If a host gets an instance request (pending), it would report it to the scheduler *before* it starts building it
15:08:06 <bauzas> because it sounds like I have some problems explaining why I think you missed something
15:08:10 <edleafe> bauzas: if that fails, then it would report that, too.
15:08:32 <edleafe> bauzas: ok, please explain
15:08:35 <bauzas> edleafe: I said I was OK with that approach for pending instances
15:09:10 <bauzas> edleafe: but as HostState is not persisted between requests, it needs to fetch all instances for each host
15:09:49 <bauzas> edleafe: req1 comes in, calls _get_all_host_states(), instantiates the list of HostState objects
15:10:01 <bauzas> edleafe: req2 comes in, calls _get_all_host_states(), instantiates the list of HostState objects
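
To make bauzas's point concrete, a toy version of the flow he is describing; the function names echo nova's scheduler driver, but the bodies are simplified stand-ins.

```python
def _get_all_host_states(db):
    # Rebuilt from the DB on every call; nothing is kept between requests.
    return [{"host": c["host"], "free_ram_mb": c["free_ram_mb"]}
            for c in db.compute_node_get_all()]


def select_destinations(db, request_spec):
    host_states = _get_all_host_states(db)
    # ... filtering and weighing against request_spec would happen here ...
    return host_states


# req1 arrives -> select_destinations() -> _get_all_host_states()
# req2 arrives -> select_destinations() -> _get_all_host_states()
# Nothing survives between the two calls, so instance info pushed to the
# scheduler in the meantime has nowhere to live until HostState is persisted.
```
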
15:10:08 <n0ano> bauzas, isn't edleafe basically saying we would change HostState to `be` persistent in the scheduler
15:10:19 <edleafe> n0ano: well, that's the end goal
15:10:22 <bauzas> n0ano: it was not explicitly said
15:10:32 <edleafe> we shouldn't really have to make that call every time
15:10:44 <bauzas> n0ano: but I answered on that approach in my email by saying "+1000 but not now"
15:11:02 <bauzas> well, to be precise, I said "+1000 but not *in that spec*"
15:11:03 <edleafe> n0ano: with opt#2, getting there would be half-complete
15:11:20 <edleafe> n0ano: with opt#1, we'd still be where we are now
15:11:33 <bauzas> edleafe: why ?
15:11:37 <alex_xu> I don't understand why persisting will resolve the race problem. Does persisting mean we need a lock for the scheduler to read HostState?
15:12:07 <bauzas> alex_xu: the main problem is that if we persist HostState, we need to carefully think about all the problems around it
15:12:11 <edleafe> bauzas: because with opt#1, we're still calling _get_all_host_states() with each request
15:12:27 <n0ano> alex_xu, shouldn't need a lock, sheduler owns the persisted HostState, no contention on it
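
A short sketch of n0ano's "no lock needed" point, with hypothetical names and assuming nova's usual cooperative (eventlet) threading, where a handler runs to completion between yields: the persisted state has a single owner inside the scheduler service, so a plain dict suffices, and readers can take a snapshot so an update arriving mid-request doesn't change the list under them. A real lock would only become necessary if the scheduler ever mutated this from preemptive OS threads.

```python
class SchedulerOwnedState(object):
    """Lives only inside the scheduler service: readers and the single writer
    share one process, so there is no cross-process contention to lock against."""

    def __init__(self):
        self._hosts = {}  # host name -> {instance uuid -> instance dict}

    def apply_update(self, host, instance):
        # Sole writer: the scheduler's RPC handler for host updates.
        self._hosts.setdefault(host, {})[instance["uuid"]] = instance

    def snapshot(self):
        # Readers (the filter pass) take a shallow copy before iterating, so an
        # update arriving mid-request cannot change the list under them.
        return {host: dict(insts) for host, insts in self._hosts.items()}
```
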
15:12:27 <bauzas> edleafe: right, and what is the scope of this BP ?
15:12:55 <bauzas> edleafe: are you planning to solve all scheduler problems in that spec, or are you focusing on not calling the DB in the filters but rather elsewhere?
15:13:14 <edleafe> bauzas: so with opt#2, we still do that, but take a small step toward the scheduler maintaining info in-memory instead of always calling
15:13:29 <bauzas> edleafe: again, that's not the scope of this spec
15:13:47 <edleafe> bauzas: getting the full host state in memory would be an L project
15:13:57 <bauzas> edleafe: if that spec requires a persistent approach, then let's write a spec for persisting HostState and make your spec depend on that one
15:14:41 <edleafe> bauzas: that would be fine, but I thought we agreed that that would come in L, not kilo
15:14:52 <bauzas> edleafe: but I'm sorry, opt#2 did not explicitly describe this approach
15:15:03 <bauzas> edleafe: exactly, here is the point
15:15:05 <edleafe> jaypipes thought that this would lay the groundwork
15:15:34 <bauzas> honestly, do we all need to meet if jay is not here again?
15:15:44 <edleafe> IOW, if we know we want to end up in one place, why add code that uses the approach we want to leave?
15:15:55 <bauzas> I don't want to spend too much of my time agreeing on something that jay could -1
15:16:04 <edleafe> bauzas: yeah, jay not being here sucks
15:16:18 <edleafe> I mean, he's the sub-PTL for scheduler
15:16:26 <bauzas> edleafe: so I ask to postpone that discussion until jay's back
15:16:38 <n0ano> he's not online so we have to try to do what we can
15:16:39 <edleafe> bauzas: ok
15:16:45 <bauzas> because I will have to explain again why I disagree etc.
15:17:08 <edleafe> n0ano: so do we need to set up another hangout or irc discussion?
15:17:09 <n0ano> if we postpone, what are the odds we'll be done by the spec exception deadline?
15:17:34 <edleafe> n0ano: we shouldn't wait until next week, since it'll be the mid-cycle
15:17:35 <n0ano> edleafe, I'd prefer IRC, I have to say the hangout didn't work that well
15:17:40 <bauzas> edleafe: sub-PTL doesn't necessarily mean that a person would be a tech lead
15:17:42 <edleafe> n0ano: agreed
15:17:52 <bauzas> IRC can do the job
15:18:05 <edleafe> bauzas: true, but he should be keeping everything on track
15:18:14 <bauzas> right, hence my ask to postpone
15:18:25 <bauzas> because we need his advice
15:18:31 <edleafe> bauzas: yes
15:18:44 <n0ano> let's keep an eye out for him on IRC and see if we can setup an ad-hoc channel
15:18:53 <n0ano> bauzas, when will you be quitting for the day?
15:19:02 <bauzas> please all keep in mind we're a distributed team with Chinese and EU people
15:19:16 <bauzas> I'm here for around 2 hours left
15:19:26 <bauzas> and then, possibly here
15:19:40 <edleafe> #action Track down jaypipes and set up an IRC discussion with bauzas, edleafe, n0ano, and anyone else interested
15:19:45 <bauzas> provided that doesn't conflict with personal concerns
15:20:00 <bauzas> n0ano: could you take the action ?
15:20:10 <n0ano> edleafe, I'm good almost any time
15:20:14 <bauzas> edleafe: you can't add actions, you're not chairing the meeting
15:20:21 <edleafe> maybe set it up for early tomorrow morning in the US?
15:20:30 <edleafe> bauzas: ah
15:20:31 <alex_xu> it's hard to catch Jay, so don't worry about me, I can check the log
15:20:36 <n0ano> bauzas, yes, I'll look for jay and try to set up something soon
15:20:53 <n0ano> #action n0ano to setup ad-hoc IRC when all available
15:21:12 * n0ano action doesn't work for me either, go figure
15:21:23 <edleafe> time to move on?
15:21:33 <n0ano> anyway, let's postpone this for now, moving on
15:21:44 <n0ano> #topic items for mid-cycle
15:22:07 <bauzas> I made a few notes on the etherpad
15:22:13 <n0ano> have we thought of things we want to raise at the mid-cycle, noting this will be stuff we will be targeting for L
15:22:28 <edleafe> link for the etherpad?
15:22:39 <bauzas> https://etherpad.openstack.org/p/kilo-nova-midcycle
15:22:40 * edleafe seems to have misplaced it
15:22:50 <edleafe> thx
15:23:45 <edleafe> n0ano: I added an item about discussing the interface questions
15:23:52 <n0ano> we should all think about some specifics for `next step', that's what I want but it's a little open
15:23:59 <n0ano> edleafe, that's good
15:24:05 <bauzas> edleafe: well, I was retaining the "next step" approach
15:24:12 <edleafe> I think it would be helpful to get a clearer picture of where we want to end up after the split
15:24:22 <bauzas> next step being Lxxx :)
15:24:29 <bauzas> hope it will be Liberty :)
15:25:37 <edleafe> bauzas: next steps are good too, but I was thinking about clarifying where we want to be after the next 10 steps :)
15:25:55 <bauzas> edleafe: I agree with you, anyway that's for the midcycle
15:27:05 <bauzas> that's also something we can tackle during midcycle
15:28:05 <n0ano> although thinking about it, general terms for the schedule are good, we should talk about specifics at the meetup
15:28:57 <bauzas> agreed
15:29:33 <n0ano> #action all to update the mid-cycle meetup etherpad with `general` ideas
15:30:22 <bauzas> sounds good
15:30:25 <bauzas> are we done ?
15:30:29 <n0ano> almost
15:30:33 <n0ano> #topic opens
15:30:39 <bauzas> oops :)
15:30:40 <n0ano> anything new from anyone today?
15:31:06 <bauzas> coding, coding, coding, fixing, releasing, coding, coding
15:31:19 <n0ano> reviewing, reviewing, ...
15:31:28 <edleafe> reviewing, reviewing...
15:31:32 <edleafe> :)
15:31:47 <n0ano> OK, winding down, I'll tnx everyone and we will be on IRC again - soon
15:31:53 <n0ano> #endmeeting