15:00:21 <n0ano> #startmeeting gantt
15:00:22 <openstack> Meeting started Tue Jan 28 15:00:21 2014 UTC and is due to finish in 60 minutes.  The chair is n0ano. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:25 <openstack> The meeting name has been set to 'gantt'
15:00:35 <n0ano> anyone here want to talk about the scheduler?
15:01:08 <toan-tran> I'm here, hi
15:01:12 <garyk1> hi
15:01:13 <PaulMurray> Hi n0ano
15:01:19 <PaulMurray> Hi all
15:01:33 <gilr> hi
15:02:04 <PaulMurray> Are there logs for the last meeting? the ones I can see are from 7th Jan
15:02:37 <n0ano> PaulMurray, yeah, start from http://eavesdrop.openstack.org/meetings/gantt/2014/
15:02:41 <n0ano> that should work
15:02:47 <PaulMurray> thanks
15:03:23 <n0ano> PaulMurray, sorry, the 14th was when I changed the meeting name to gantt, hence the location of the logs changed
15:03:37 <n0ano> boris-42, you here?
15:03:52 <boris-42> n0ano yep
15:03:59 <n0ano> cool
15:04:02 <johnthetubaguy> hi
15:04:08 <n0ano> #topic no-db status update
15:04:11 <boris-42> n0ano so we made some benchmarks
15:04:14 <n0ano> boris-42, anything happening?
15:04:22 <boris-42> n0ano 1 controller + 128 computes
15:04:32 <boris-42> n0ano with Rally
15:04:46 <boris-42> n0ano but seems like there are other bottlenecks =)
15:04:53 <boris-42> n0ano not related to the scheduler=)
15:05:09 <n0ano> I hate it when the real world messes with my plans :-)
15:05:16 <boris-42> =))
15:05:38 <boris-42> n0ano so the scheduler is not the first problem that you will face with OpenStack =)
15:06:08 <n0ano> do you know where the other bottlenecks are?
15:06:09 <johnthetubaguy> what are you finding to be the first problem?
15:06:13 <toan-tran> boris-42 it's the least problem we have :))
15:06:42 <boris-42> n0ano another of my teams is working on profiling stuff
15:06:56 <garyk1> boris-42: which profiling tools are you using?
15:06:56 <boris-42> n0ano so we will get "why it works so slowww..."
15:07:06 <boris-42> garyk1 we are building our own tool =)
15:07:07 <toan-tran> boris-42 what is the problem exactly, can you brief?
15:07:13 <boris-42> toan-tran idk
15:07:19 <boris-42> toan-tran everywhere =)
15:07:40 <n0ano> (I've always said, performance is an infinite resource sink)
15:07:44 <boris-42> garyk1 it is based on ceilometer & rpc notifier
15:07:58 <garyk1> boris-42: ok, thanks. it would be nice if we all were using the same tool and then could share
15:08:12 <boris-42> garyk1 and this lib https://github.com/pboris/osprofiler
15:08:22 <boris-42> garyk1 i have to find time to finish this work
15:08:26 <garyk1> boris-42: thanks. i was looking into zipkin
15:08:38 <boris-42> garyk1 zipkin is unmergeable into upstream
15:08:49 <boris-42> garyk1 so we will get the same result
15:08:54 <garyk1> boris-42: ok, thanks!
15:08:58 <boris-42> garyk1 but in upstream, with & without zipkin =)
15:09:15 <n0ano> interesting point, is there an openstack profiler/performance project (and if not, should such a thing be started)?
15:09:30 <boris-42> n0ano I started building one
15:09:39 <boris-42> n0ano and have almost finished it...
15:09:48 <boris-42> http://pavlovic.me/drawer.html < it will look like this
15:09:57 <boris-42> there are a couple of oslo patches
15:10:05 <boris-42> and that's all
15:10:10 <garyk1> cool
15:10:18 <boris-42> garyk1 I hope I will find time to finish it
15:10:55 <n0ano> Was the BP you mentioned earlier for this? If so, I can see if I can get some people interested in helping out
15:12:16 <n0ano> actually, that was a link to the github source tree; is there a BP?
15:13:40 <n0ano> boris-42, yt?
15:14:46 <n0ano> well, boris-42 seems to have dropped off, let's move on for now
15:14:53 <n0ano> #topic scheduler code forklift
15:15:34 <n0ano> Don't know if you all have been tracking it, but I've put in a patch for review that gets the gantt tree to pass all the scheduler tests; the tree also works as a scheduler for nova
15:15:56 <n0ano> cf the commit at https://review.openstack.org/#/c/68521/
15:16:18 <n0ano> (as always, reviews are very welcome)
15:16:42 <alaski> n0ano: awesome.  I'll take a look at it when I have a moment
15:16:57 <johnthetubaguy> +1 top work
15:16:59 <n0ano> with this change we can then start the work of disentangling gantt from nova and truly turn it into a separate service
15:17:37 <n0ano> also, I've completely ignored the client side of the work; we have a bare-bones tree but I don't even know where to start with that.
15:17:53 <n0ano> anyone who wants to look at the client side, feel free.
15:18:02 <coolsvap> n0ano: I had started with initial things
15:18:03 <johnthetubaguy> how is tempest doing, are the tests passing when running gantt now?
15:18:38 <n0ano> johnthetubaguy, not sure, Jenkins is completely happy; how do I run tempest?
15:18:54 * n0ano has noticeable holes in my openstack knowledge
15:19:12 <johnthetubaguy> ah, you need to change check-tempest-dsvm-full to use gantt for the scheduler, using your devstack changes
15:19:32 <johnthetubaguy> it would be good to start running that against nova too, so people spot when they break gantt with their changes to nova
15:20:15 <n0ano> that'll require some infrastructure changes, I'll need help on that
15:20:21 <n0ano> speaking of devstack...
15:20:42 <toan-tran> sometimes after updating nova I got problems with the RPC API
15:20:42 <n0ano> I have the patch up for review to add gantt but nobody has looked at it
15:20:47 <n0ano> #link https://review.openstack.org/#/c/67666/
15:20:51 <toan-tran> should we specify the RPC version for gantt?
15:20:57 <garyk1> i started to look at the devstack patch today but got sidetracked. i'll comment soon
15:21:03 <n0ano> reviews for that would be `really` good
15:21:07 <n0ano> garyk1, great tnx
15:21:12 <johnthetubaguy> n0ano: it's not too bad, they have documents, and you can specify stuff that goes into the devstack localrc, relating to a job config, but I've not done that in anger before
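For reference, a job override of that sort might look roughly like the sketch below; the gantt service name and the SCHEDULER value are assumptions here, since the real knobs are whatever the devstack patch at https://review.openstack.org/#/c/67666/ introduces:

    # hypothetical localrc fragment for a gantt-scheduled devstack run
    disable_service n-sch    # drop the built-in nova scheduler
    enable_service gantt     # assumed service name from the devstack patch
    SCHEDULER=gantt.scheduler.filter_scheduler.FilterScheduler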
15:22:12 <n0ano> Robert Collins helped (as in did all the work :-) set up the gantt tree, I'm sure we can get this put in also
15:22:42 <n0ano> up until now the gantt tree didn't work as a scheduler, so now is the right time to put in the tempest checks
15:23:43 <johnthetubaguy> :)
15:24:49 <n0ano> that's pretty much where we are on the forklift so moving on
15:25:14 * n0ano hit the wrong key, lost my screen for a second
15:25:31 <toan-tran> Gantt is tightly tied to nova; should we specify nova in the requirements?
15:25:57 <n0ano> toan-tran, not sure what you're asking?
15:26:13 <toan-tran> the RPC version is frozen in gantt,
15:26:35 <toan-tran> it may cause some incompatibilities when importing from nova
15:27:06 <n0ano> probably; I consider the importing from nova a transitional phase, the goal is to cut the cord as soon as possible
15:27:31 <n0ano> I don't want to formalize the nova import too much, I just want it to go away
15:27:57 <toan-tran> then maybe we have to put it in oslo
15:28:35 <n0ano> toan-tran, yes, I would imagine that, rather than duplicate code between nova & gantt it should go into something like oslo instead
15:29:36 <n0ano> moving on
15:29:46 <n0ano> #topic policy based scheduler
15:29:51 <n0ano> toan-tran, the floor is yours
15:29:58 <toan-tran> n0ano thanks
15:30:21 <toan-tran> I have uploaded the code of the blueprint here: https://review.openstack.org/#/c/61386/
15:30:52 <toan-tran> the purpose is to create a scheduler that is capable of applying different policies in different contexts
15:31:15 <toan-tran> like an aggregate, an availability zone, or a particular user (or group of users)
15:31:52 <toan-tran> we came up with this after working for a while with different scenarios
15:32:04 <toan-tran> especially when thinking of pcloud
15:32:17 <johnthetubaguy> like the AggregateRamFilter we have today?
15:32:22 <toan-tran> yes
15:32:27 <debo_os> how are the policies defined?
15:33:00 <toan-tran> here is an example: https://review.openstack.org/#/c/61386/7/etc/nova/scheduling_policy.json.sample
15:33:21 <toan-tran> it's in JSON format, like the current policy.json
15:33:39 <garyk1> toan-tran: does this overlap with work that inbar and alex are doing?
15:33:52 <toan-tran> part of it, yes
15:34:00 <Yathi> what is an example target - effect - condition?
15:34:06 <garyk1> that is https://review.openstack.org/#/c/58480/. thanks for the update
15:34:18 <toan-tran> Yathi: target = an aggregate
15:34:30 <toan-tran> Effect = RamOverProvisioning (1.5)
15:34:52 <toan-tran> Condition = all/time/exceptional condition, etc.
15:35:06 <toan-tran> that's the standard for a policy-based management system
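To make the target/effect/condition structure concrete, a rule in that file might look something like the sketch below; the key names are illustrative guesses, and the linked scheduling_policy.json.sample has the actual schema:

    {
        "target": {"aggregate": "fast_hosts"},
        "effect": {"name": "RamOverProvisioning", "ratio": 1.5},
        "condition": "all"
    }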
15:35:27 <toan-tran> garyk1 in fact, when we started working on the scheduler
15:35:40 <toan-tran> we looked at the multi scheduler blueprint
15:36:07 <garyk1> toan-tran: nice to hear
15:36:10 <toan-tran> the problem was that it uses flavor metadata for customizing the scheduler
15:36:47 <toan-tran> then it loops between scheduler --> filter --> metadata --> scheduler ...
15:37:05 <johnthetubaguy> so I like the policy file approach, but I assume we could have both places as data sources? And so unify the two blueprints?
15:37:59 <toan-tran> johnthetubaguy : the idea of active schedulers already comes from the multi-scheduler blueprint
15:38:00 <toan-tran> :)
15:38:19 <johnthetubaguy> OK, cool
15:38:29 <toan-tran> but we started designing it as a policy-based management system
15:38:36 <toan-tran> not as a scheduler per se
15:39:22 <toan-tran> thus it is more like a scheduling center that calls other schedulers, which are implemented as "plugins"
15:39:33 <toan-tran> each plugin corresponds to a policy rule
15:39:41 <toan-tran> example
15:39:57 <Yathi> so I am guessing you can switch the scheduler drivers based on policy at runtime
15:40:16 <toan-tran> "service_class" = gold ==> calling plugin with parameter gold
15:40:29 <toan-tran> Yathi yes, that's the purpose
15:40:40 <toan-tran> we have implemented a plugin "generic"
15:40:48 <debo_os> so you basically have named macros or named references to drivers
15:41:08 <toan-tran> which is basically a "filter-scheduler" with customizable filters/weighers that the admin can change in real time
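Putting those pieces together, a rule routing a service class to the "generic" plugin might look roughly like this sketch; the syntax is hypothetical, following the discussion above rather than the blueprint's exact grammar, with nova's stock filter/weigher names used for illustration:

    {
        "target": {"service_class": "gold"},
        "effect": {
            "plugin": "generic",
            "filters": ["RamFilter", "ComputeFilter"],
            "weighers": ["RAMWeigher"]
        },
        "condition": "all"
    }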
15:41:22 <toan-tran> debo_os : yes
15:41:45 <toan-tran> I think that in some use cases, like pCloud,
15:41:58 <toan-tran> users would like to define their own policies for their bare-metal servers
15:42:26 <toan-tran> and that would not interfere with the rest of the datacenter
15:42:32 <Yathi> so you are tightly coupling your code to the filter scheduler then? what happens if we have the solver scheduler driver in... which does scheduling based on constraints in the future?
15:42:42 <debo_os> got it ... but how would users define their policies ....
15:43:04 <toan-tran> debo_os : by using our rule language
15:43:29 <toan-tran> much like how admins use filters now
15:43:59 <toan-tran> Yathi: the solver scheduler can be called too
15:44:20 <toan-tran> Yathi I'm thinking of using the rules to pass filters as parameters to solvers
15:44:43 <toan-tran> that's why I'm interested in how you use the filter/weigher list to constitute the constraints
15:45:16 <toan-tran> in fact we have also figured out how we can cooperate with the solver scheduler
15:45:52 <Yathi> toan-tran - agreed, this is the work some of my team members are doing currently - adding pluggable constraints - hopefully we can sync up later on
15:46:03 <toan-tran> Yathi +1
15:46:19 <toan-tran> I think the solver is very interesting in terms of optimization
15:46:25 <debo_os> toan-tran: any pointers to your rule language ....
15:46:38 <Yathi> I am in the process of first getting my solver scheduler code ready for review.. will work with you later on
15:46:53 <toan-tran> I documented it in the doc at the gerrit link
15:47:07 <debo_os> thx
15:47:26 <toan-tran> https://review.openstack.org/#/c/61386/7/doc/source/devref/scheduling_policy.rst
15:47:28 <toan-tran> here it is
15:48:28 <toan-tran> currently we have implemented some plugins in the /plugins folder
15:49:12 <toan-tran> they just call filters and weighers on the host_states
15:49:18 <Yathi> Toan-tran - I like this work, and I am willing to collaborate on this further
15:49:28 <toan-tran> Yathi +1
15:49:37 <n0ano> toan-tran, OK, tnx, looks like you sparked some interest
15:49:41 <n0ano> moving on.
15:49:47 <n0ano> #topic instance groups
15:49:52 <debo_os> code up for review
15:49:53 <toan-tran> Yathi I can send you what we think of the connection between the two
15:49:54 <n0ano> garyk1, did you want to give an update
15:50:13 <debo_os> https://review.openstack.org/#/c/62557/
15:51:01 <debo_os> ok ... I can chime in for Gary if he has stepped out
15:51:12 <n0ano> debo_os, looks like, go ahead
15:51:16 <debo_os> the code is up for review  https://review.openstack.org/#/c/62557/
15:51:42 <debo_os> Right now it looks big due to the docs and the v2/v3 API being in the same patch ... one comment has been to split it up
15:51:45 <garyk1> sorry, i was looking at something else
15:52:11 <n0ano> debo_os, I saw that, 3k lines is a bit daunting
15:52:49 <debo_os> +1 but it's mostly docs .... and we reduced a lot of code by removing an extra layer
15:53:04 <garyk1> i guess breaking it up into smaller pieces may help with the review process.
15:53:16 <debo_os> which was due to a pre-object-era design (the original code is old)
15:53:28 <debo_os> so does it make sense to do 3 splits - v2, v3, docs?
15:53:55 <n0ano> debo_os, that sounds like a good plan to me
15:53:59 <debo_os> ok
15:54:54 <n0ano> OK, moving on
15:54:58 <n0ano> #topic opens
15:55:10 <n0ano> in the last few minutes, does anyone have any opens?
15:55:58 <n0ano> hearing crickets, we'll close in...
15:56:01 <n0ano> 3...
15:56:16 <n0ano> 2...
15:56:19 <johnthetubaguy> Reviews welcome for the caching scheduler
15:56:29 <johnthetubaguy> https://review.openstack.org/#/c/67855
15:56:32 <johnthetubaguy> it's rough
15:56:39 <johnthetubaguy> currently profiling and tuning stuff
15:56:58 <johnthetubaguy> but would be interested in what people think
15:57:12 <johnthetubaguy> it helps a bit in some cases
15:57:21 <gilr> toan-tran: would you publish your view of the connection between policy-based and solver on the mailing list?
15:57:32 <gilr> we might have something to contribute there
15:58:00 <toan-tran> gilr : yes
15:58:25 <gilr> toan-tran: excellent
15:58:33 <n0ano> 1...
15:58:44 <n0ano> tnx everyone, we'll talk again next week
15:58:51 <n0ano> #endmeeting