14:00:37 <n0ano_> #startmeeting nova-scheduler
14:00:38 <openstack> Meeting started Mon Feb  1 14:00:37 2016 UTC and is due to finish in 60 minutes.  The chair is n0ano_. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:41 <openstack> The meeting name has been set to 'nova_scheduler'
14:00:49 <n0ano_> anyone here to talk about the scheduler?
14:00:50 <edleafe> \o
14:00:56 <Yingxin> o/
14:01:33 <n0ano_> edleafe, looks like you had a long, involved meeting last week :-)
14:01:49 <edleafe> it was tough, but I made it through somehow
14:02:03 <n0ano_> :-)
14:03:36 <n0ano_> well, it's fast approaching 5 after, let's get started
14:03:45 <n0ano_> #topic mid-cycle meetup report back
14:03:56 * bauzas waves
14:03:57 <n0ano_> was anyone at the meetup who can comment on it?
14:03:57 <bauzas> I was having some IRC bouncer issue
14:04:15 <bauzas> I was
14:04:22 <n0ano_> bauzas, I always just blame microsoft, it's always their fault somehow.
14:04:22 <bauzas> not sure I'm the only folk here
14:04:44 <n0ano_> edleafe & I didn't make it, you might be the only one here who attended
14:04:52 <johnthetubaguy> I am lurking
14:04:54 <bauzas> oh snap :)
14:05:02 <bauzas> sooo
14:05:16 <johnthetubaguy> I am writing a quick midcycle summary to the ML at the moment
14:05:20 <bauzas> all the tracked records are in https://etherpad.openstack.org/p/mitaka-nova-midcycle
14:05:37 <bauzas> for the sched bits, we mostly discussed about three things
14:05:40 <n0ano_> cool, any high points that apply to the scheduler?
14:06:13 * carl_baldwin lurking...
14:06:24 <bauzas> #1 the status of all our changes => L264
14:07:01 <bauzas> #2 the longest blueprint series ever, aka. jay's resource-providers
14:07:03 <bauzas> et al.
14:07:21 <bauzas> #3 having scheduler functional tests in-tree
14:07:59 <bauzas> about #1, we're on-board with what we promised, except check-destination-on-migrations, which I'm taking back
14:08:16 <bauzas> for #2, it took us nearly 3 days to get it covered
14:08:34 <n0ano_> but I see jay's BP finally got merged
14:08:51 <bauzas> that's very large, so I'd prefer folks lurking here to read L102 of https://etherpad.openstack.org/p/mitaka-nova-midcycle
14:08:59 <bauzas> n0ano_: 2 of 7
14:09:47 <n0ano_> we should update https://etherpad.openstack.org/p/mitaka-nova-spec-review-tracking to reflect all 7 then
14:09:52 <bauzas> and #3, it's totally possible to have functional tests in-tree for the scheduler like gibi_ made for servergroups
14:10:29 <bauzas> n0ano_: not sure it's the right etherpad, but yes to that
14:10:54 <bauzas> I was more thinking of https://etherpad.openstack.org/p/mitaka-nova-priorities-tracking
14:11:01 <bauzas> s/more/rather
14:11:10 <n0ano_> it's the one I've been going by, as long as we have a definitive list somewhere that's what's important
14:12:12 <bauzas> one point I forgot to mention is the intersection with carl_baldwin's spec for Tenant networks in Neutron
14:13:01 <bauzas> that interaction is mostly covered by https://review.openstack.org/#/c/253187/
14:13:07 <carl_baldwin> Thanks for the discussion there.  My spec is here:
14:13:14 <bauzas> oh man
14:13:18 * bauzas typing too fast
14:13:27 <bauzas> s/tenant networks/routed networks
14:13:36 <carl_baldwin> #link https://review.openstack.org/#/c/263898/
14:13:46 <bauzas> thanks
14:14:03 <bauzas> that's it for me
14:14:05 <bauzas> questions ?
14:14:38 <carl_baldwin> Jay added a section on it to his spec, can't remember which one of the series.
14:14:47 <bauzas> carl_baldwin: the one I mentioned above
14:14:53 <bauzas> ie. 253187
14:15:04 <carl_baldwin> Thanks!
14:15:05 <bauzas> still subject to review
14:15:38 <n0ano_> sounds like there's no major problems, just a lot of work that needs to be done
14:16:02 <Yingxin> Is there a detailed test requirement for #3? I'm interested in it.
14:16:34 <bauzas> Yingxin: that was answered on the ML, lemme find it
14:17:25 <Yingxin> bauzas: is it http://lists.openstack.org/pipermail/openstack-dev/2016-January/085182.html ?
14:17:26 <bauzas> http://lists.openstack.org/pipermail/openstack-dev/2016-January/085107.html
14:17:37 <bauzas> yup
14:17:58 <bauzas> which I totally forgot to mention, although I remember having reviewed that :)
14:18:55 <Yingxin> yes, so any idea about the requirement I drafted?
14:18:59 <n0ano_> Yingxin, and you're mentioned specifically in that thread so it's great that you're still interested
14:19:20 <johnthetubaguy> carl_baldwin: thanks for the updates on that, I added a few comments yesterday
14:19:37 <Yingxin> n0ano_: :)
14:19:53 <carl_baldwin> johnthetubaguy: just reading then this morning.  Thank you.
14:20:37 <bauzas> Yingxin: which draft?
14:21:23 <Yingxin> #link http://lists.openstack.org/pipermail/openstack-dev/2016-January/085182.html
14:21:44 <Yingxin> bauzas: the input output and boundary things.
14:22:21 <bauzas> Yingxin: well, I don't disagree with your email :)
14:23:12 <bauzas> Yingxin: what I forgot was https://github.com/openstack/nova/blob/master/nova/tests/functional/test_server_group.py
14:23:40 <Yingxin> bauzas: thanks I'm looking at it
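[Editor's note: for readers unfamiliar with the in-tree functional tests referenced above, here is a rough sketch of what such a scheduler test could look like. It is modelled loosely on the linked test_server_group.py; the fixture and helper names are assumptions about the nova test tree of that era rather than a verbatim copy, and the image-service stubbing is omitted.]

```python
# Sketch only -- class and fixture names are assumptions based on
# nova/tests/functional/test_server_group.py; check that file for the real ones.
from nova import test
from nova.tests import fixtures as nova_fixtures


class SchedulerFunctionalTest(test.TestCase):
    """Boot a server through the real filter scheduler with a fake virt driver."""

    def setUp(self):
        super(SchedulerFunctionalTest, self).setUp()
        # Real API and DB come from fixtures; the virt layer stays fake.
        self.api = self.useFixture(nova_fixtures.OSAPIFixture()).api
        # Start every service a boot request has to cross.
        self.start_service('conductor')
        self.start_service('scheduler')
        self.start_service('compute', host='host1')

    def test_boot_lands_on_the_only_host(self):
        # Image-service stubbing is omitted here; a real test would set up
        # the fake image service and use one of its image ids.
        self.api.post_server({'server': {
            'name': 'sched-func-test',
            'imageRef': '<image uuid from the fake image service>',
            'flavorRef': '1',
        }})
        # ...then poll until ACTIVE and assert the instance ended up on 'host1'.
```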
14:23:57 <bauzas> moving on ?
14:24:11 <n0ano_> bauzas, looks like
14:24:20 <n0ano_> #topic Specs/BPs/Patches
14:24:33 <n0ano_> Any of these need discussing today?
14:24:58 <Yingxin> I implemented a prototype as described in https://blueprints.launchpad.net/nova/+spec/eventually-consistent-scheduler-host-state and proposed a related session in Austin summit.
14:25:08 <Yingxin> The prototype shows a big improvement in decision time and a reduced chance of retries. It could be a shared-state version of the filter scheduler.
14:25:34 <Yingxin> However, according to the bp https://review.openstack.org/#/c/271823/1/specs/mitaka/approved/resource-providers-scheduler.rst , the host state will be removed and claims will be made directly to the db in the future.
14:25:47 <Yingxin> So I'm not sure whether I should continue this effort.
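[Editor's note: to make the "shared-state" idea above concrete, here is a purely illustrative toy sketch of an eventually-consistent, in-memory host-state cache that a scheduler could claim against. None of these classes exist in nova; the names and structure are invented for illustration and are not Yingxin's actual prototype.]

```python
# Hypothetical illustration of the shared-state approach, not nova code.
import threading


class LocalHostState(object):
    """One scheduler's in-memory view of a compute node's free resources."""

    def __init__(self, host, free_ram_mb, free_vcpus):
        self.host = host
        self.free_ram_mb = free_ram_mb
        self.free_vcpus = free_vcpus
        self.seen_version = 0  # last update version received from the compute node


class SharedStateScheduler(object):
    """Filter-scheduler-like loop that claims against a local cache.

    The cache is kept eventually consistent by applying incremental updates
    sent by the compute nodes, instead of claiming rows directly in the DB
    as the resource-providers series proposes.
    """

    def __init__(self):
        self.hosts = {}  # host name -> LocalHostState
        self.lock = threading.Lock()

    def apply_compute_update(self, host, version, free_ram_mb, free_vcpus):
        """Incremental sync message from a compute node; newer versions win."""
        with self.lock:
            state = self.hosts.setdefault(host, LocalHostState(host, 0, 0))
            if version > state.seen_version:
                state.free_ram_mb = free_ram_mb
                state.free_vcpus = free_vcpus
                state.seen_version = version

    def select_and_claim(self, ram_mb, vcpus):
        """Pick a host and claim against the local view; the compute node
        later confirms or rejects, which is where a retry can still happen."""
        with self.lock:
            for state in self.hosts.values():
                if state.free_ram_mb >= ram_mb and state.free_vcpus >= vcpus:
                    state.free_ram_mb -= ram_mb
                    state.free_vcpus -= vcpus
                    return state.host
        return None
```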
14:26:21 <n0ano_> hmm, without knowing the details sounds like that's an excellent Austin topic
14:26:38 <Yingxin> n0ano_: thanks
14:27:17 <n0ano_> Yingxin, you can start a ML discussion if you want to get some feedback before then
14:27:50 <Yingxin> ok
14:28:11 <n0ano_> if there's nothing else on this
14:28:34 <n0ano_> #topic bugs
14:28:54 <n0ano_> beyond encouraging everyone to take a bug (and fix it) I have some good news
14:29:13 <bauzas> Yingxin: so, resource-providers-scheduler is subject to change
14:29:21 <bauzas> Yingxin: at least where the claims are done
14:29:51 <bauzas> Yingxin: I'm more concerned about how your BP could impact the main effort of the resource-providers epic
14:31:03 <Yingxin> bauzas: well it can also apply to resource-providers if the scheduler needs an in-memory cache
14:31:36 <Yingxin> And when there is a requirement to have multiple scheduler instances.
14:31:52 <bauzas> Yingxin: I'd be interested in seeing the prototype
14:32:55 <n0ano_> Yingxin, would it be possible to post a WIP gerrit review for your prototype?
14:33:25 <Yingxin> bauzas: thanks, it is only a quick prototype, no unit tests and may have naming problems
14:33:52 <bauzas> Yingxin: that's not really a problem :)
14:34:00 <bauzas> just mark it WIP / DNM
14:34:04 <bauzas> and put the -W button
14:34:06 <Yingxin> I'll modify it before it is uploaded to gerrit
14:34:16 <Yingxin> well, OK
14:34:49 <Yingxin> The most important thing is that it is runnable.
14:34:52 <bauzas> but if your prototype doesn't throw the resource-providers epic under the bus, it could be interesting to see it in Newton
14:36:38 <Yingxin> Yup, so I'll continue
14:37:16 <n0ano_> so, back to bugs
14:37:28 <n0ano_> Intel has a large group in Austin and I've talked management into prioritizing scheduler bugs for that group; we should be getting some progress on reducing the scheduler bug list (soon)
14:38:13 <n0ano_> #topic opens
14:38:25 <n0ano_> so, anyone have anything else they'd like to discuss?
14:38:29 <edleafe> n0ano_: Austin or San Antonio?
14:38:44 <n0ano_> edleafe, both in Texas, what's the question?
14:39:13 <edleafe> n0ano_: I knew someone hired to work in SA for Intel
14:39:13 <n0ano_> edleafe, my bad, yes the group is in San Antonio
14:39:29 <edleafe> n0ano_: ah, just wondered if he would have to move :/
14:39:32 <n0ano_> they're Texas cities, who can keep them straight :-)
14:40:35 <n0ano_> I'm trying to blot it out of my mind, Intel tried to send me there for a 6 month rotation (I talked them out of it)
14:40:48 <bauzas> n0ano_: some people are trying to pool their efforts across companies
14:41:05 <bauzas> n0ano_: you should talk to markus_z
14:41:18 <bauzas> I remember he made a call for help
14:41:39 <n0ano_> WFM, I just want to get progress on reducing the bug list
14:42:11 <bauzas> n0ano_: http://lists.openstack.org/pipermail/openstack-dev/2016-January/083456.html
14:43:17 <n0ano_> bauzas, tnx, I'll see if we can't get some help for that
14:43:59 <n0ano_> anything else for today?
14:44:38 <n0ano_> hearing crickets
14:45:00 <n0ano_> OK, tnx everyone, talk to you all next week
14:45:04 <n0ano_> #endmeeting