15:00:02 <nijaba> #startmeeting Ceilometer
15:00:02 <nijaba> #meetingtopic Ceilometer
15:00:02 <nijaba> #chair nijaba
15:00:02 <nijaba> #link http://wiki.openstack.org/Meetings/MeteringAgenda
15:00:03 <openstack> Meeting started Thu Nov 29 15:00:02 2012 UTC.  The chair is nijaba. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:07 <openstack> The meeting name has been set to 'ceilometer'
15:00:09 <openstack> Current chairs: nijaba
15:00:12 <nijaba> ATTENTION: please keep discussion focussed on topic until we reach open discussion topic
15:00:12 <nijaba> Hello everyone! Show of hands, who is around for the ceilometer meeting?
15:00:12 <nijaba> o/
15:00:17 <yjiang5_away> o/
15:00:25 <eglynn> o/
15:00:31 <n0ano> o/
15:01:04 <nijaba> #topic actions from previous meeting
15:01:04 <nijaba> #topic eglynn drive fleshing out of nova-virt API to completion
15:01:18 <eglynn> So lots of circular discussion on the upstream ML
15:01:31 <Pete_> o/
15:01:38 <eglynn> and we discussed this further on #openstack-metering yesterday
15:01:46 <eglynn> the new preferred approach is to code directly to the underlying hypervisor layer
15:01:54 <eglynn> (e.g. the libvirt API)
15:02:09 <eglynn> so as to avoid the release mgmt / versioning complexities of nova releasing a separate library
15:02:09 <nijaba> how does this differ from what we have today?
15:02:35 <eglynn> we currently code to the hypervisor driver in nova
15:02:49 <eglynn> (not to the libvirt-python API)
15:03:04 <nijaba> ah, k
15:03:05 <eglynn> the latter is packaged as a separate library
15:03:13 <eglynn> (independently versioned etc.)
15:03:31 <dhellmann> o/
15:03:36 <nijaba> but would we be able to retrieve openstack context that way (ie meta-data)?
15:03:37 <eglynn> we only use a tiny chunk of the nova hypervisor driver
15:03:38 <n0ano> does libvirt control Xen & qemu VMs in addition to KVM?
15:03:53 <eglynn> n0ano: it can do
15:04:10 <eglynn> so the idea would be to also take a similar approach with xenapi etc.
15:04:11 <n0ano> does that mean it doesn't right now?
15:04:19 <eglynn> no
15:04:22 <nijaba> n0ano: it's one of 2 possibilities for xen, and it is the case for lxc
15:04:34 <n0ano> OK, but what about qemu?
15:04:35 <eglynn> it can do, if configured to do so
15:05:10 <nijaba> eglynn: not sure you got to answer my q:
15:05:12 <eglynn> yeah qemu too
15:05:12 <nijaba> but would we be able to retrieve openstack context that way (ie meta-data)?
15:05:31 <eglynn> nijaba: so that's one wrinkle
15:05:39 <nijaba> a big one...
15:05:48 <eglynn> the libvirt domain representation doesn't give us the flavor or display name
15:06:05 <dhellmann> those are pretty important details for billing
15:06:13 <eglynn> (both of which we need for metering, certainly the flavor for the instance.<type> meter is needed, display name could be dropped maybe...)
15:06:20 <nijaba> hmmm... important as in essential, yes
15:06:21 <eglynn> so we'll still have to call out to nova-api to get this info
15:06:28 <eglynn> (though we may not have to call out to it on every polling cycle)
15:06:31 <dhellmann> otoh, if we query nova for a list of instances first we wouldn't have to worry about the formula for vm names, right?
15:06:37 <eglynn> i.e. only when we see a new instance UUID
15:07:14 <nijaba> eglynn: ah, and then we could retrieve the info through standard api calls?
15:07:22 <dhellmann> how would we see that? through notifications?
15:07:36 <eglynn> nijaba: standard API, as in nova public API?
15:07:38 <yjiang5_away> dhellmann: Through novaclient or notification, either way
15:07:53 <nijaba> eglynn: yep?
15:07:57 <eglynn> nijaba: sure
15:08:11 <eglynn> nijaba: (that's how we list the instances currently)
15:08:16 <dhellmann> does the nova api allow queries for instances on a given host?
15:08:23 <eglynn> (i.e. we no longer go direct to the nova DB as before)
15:08:41 <yjiang5_away> dhellmann: that's the current method, novaclient to get all instances on a given host
15:08:53 <dhellmann> yjiang5_away: current where? in ceilometer?
15:09:05 <dhellmann> I thought we were using an internal API to talk to nova now.
15:09:06 <eglynn> yep it does
15:09:23 <yjiang5_away> dhellmann: yes, I remember we get the instances for one host through novaclient?
15:09:27 <eglynn> dhellman not to grab the instance list, that goes thru the public nova API
15:09:32 <eglynn> dhellmann ^^^
15:09:39 <dhellmann> eglynn: ah, I thought everything was internal
15:09:54 <dhellmann> ok, so we have a way to get the data without relying on internals, we just need to work out how often
15:09:58 <eglynn> so it used to be grabbed via the nova DB, but that changed recently
15:10:18 <eglynn> dhellmann: so I'm thinking every time we see a new instance UUID
15:10:23 <dhellmann> I suggest we start simple and continue to ask every time we start to poll, then optimize as a separate step after we have the libvirt wrapper built
15:10:33 <eglynn> yep reasonable
15:10:40 <dhellmann> eglynn: I don't know what you mean by "see a new instance UUID", though
15:10:49 <dhellmann> what bit of code is seeing it?
15:11:18 <eglynn> dhellmann: when libvirt reports an instance UUID that wasn't included in the instances described by the *last* call to the public nova API
15:11:28 <dhellmann> aha, so libvirt knows the uuid?
15:11:29 <nijaba> right, we don't currently maintain any list, IIRC...
15:11:34 <dhellmann> I thought it only had a "name" or something
15:11:35 <eglynn> dhellmann: yep
15:12:13 <eglynn> dhellmann: nova uuid, and a libvirt ID, and a formatted instance-000000%x type name
15:12:14 <dhellmann> easy, then, we can just ask libvirt for instances and then ask nova for details as we discover new ones
15:12:21 <eglynn> exactomundo!
15:12:32 <yjiang5_away> eglynn: we need to check the nova code currently invoked by ceilometer, to see if nova adds anything after it gets the information from libvirt?
15:12:44 <n0ano> so, if I understand this, we're now requiring libvirt for things to work, if you configure to use xenapi you won't get this info
15:12:45 <nijaba> ok, so who takes the action to transform the agent that way?
15:13:03 <eglynn> n0ano: the idea is to do the same for xenapi also
15:13:09 <nijaba> n0ano: it's already like this anyway.  we currently only support libvirt
15:13:19 <eglynn> now this may not be quite as neat with the xenapi version
15:13:28 <eglynn> (where there's some RRD file parsing needed to get the hypervisor stats reliably)
15:13:37 * eglynn needs to look into that in more detail
15:13:53 <eglynn> yjiang5_away: I didn't understand the question
15:14:03 <n0ano> just so long as we don't forget the hypervisor APIs
15:14:18 <n0ano> s/hypervisor/other hypervisor
15:14:18 <eglynn> so I have a prototype of this working for libvirt, need to polish a bit more before proposing for review
15:14:25 <eglynn> n0ano: agree
15:14:40 <eglynn> should have a patch tmrw
15:14:42 <yjiang5_away> eglynn: currently we invoke nova code to fetch libvirt information, right? Does nova add anything before returning it to ceilometer? If yes, that added stuff should be handled by ceilometer then.
15:14:59 <nijaba> eglynn: cool, thanks.  should you action yourself on this, or do you think you may need more than a week?
15:14:59 <dhellmann> n0ano: yep, we want to support the other hypervisors, eventually
15:15:04 <jd__> hi
15:15:11 <nijaba> hey jd__!
15:15:29 <jd__> :)
15:15:49 <eglynn> yjiang5_away: so I've removed the direct nova usage, instead calling libvirt API directly, but based on the original nova code (the v. small subset we use...)
15:15:57 <dhellmann> eglynn: is there a blueprint for this stuff? that may make more sense than an action item
15:16:12 <eglynn> yjiang5_away: so I don't think I've missed any nova "secret sauce"
15:16:12 <nijaba> dhellmann: true
15:16:18 <eglynn> dhellmann: I'll file a BP
15:16:20 <yjiang5_away> eglynn: great.
15:16:23 <nijaba> thanks
15:16:26 <dhellmann> eglynn: cool, thanks
15:16:29 <eglynn> np!
15:16:31 <nijaba> shall we move on?
15:16:43 <eglynn> cool
15:16:45 <dhellmann> also, eglynn, double extra plus thanks for taking the lead on this. I'm *so* tired of nova breaking us.
15:16:56 <eglynn> yeah, I hear ya!
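A minimal sketch of the approach discussed above: poll libvirt directly for running domains, and call out to the public nova API (here via python-novaclient) only when a previously unseen instance UUID appears, caching the metadata (display name, flavor) after that. The connection URI, credentials, cache scheme, and sample shape are illustrative assumptions, not eglynn's actual patch.

```python
# Illustrative sketch only: poll libvirt for running domains and fetch
# OpenStack metadata (flavor, display name) from the public nova API
# only when a previously unseen instance UUID shows up.
import libvirt
from novaclient.v1_1 import client as nova_client  # assumed legacy client interface

_metadata_cache = {}  # instance UUID -> nova server details


def poll_instances(nova):
    conn = libvirt.openReadOnly('qemu:///system')
    try:
        for dom_id in conn.listDomainsID():
            dom = conn.lookupByID(dom_id)
            uuid = dom.UUIDString()
            if uuid not in _metadata_cache:
                # new UUID: one call to the nova public API, then cache it
                _metadata_cache[uuid] = nova.servers.get(uuid)
            server = _metadata_cache[uuid]
            cpu_time = dom.info()[4]  # cumulative CPU time, in nanoseconds
            yield uuid, server.name, server.flavor['id'], cpu_time
    finally:
        conn.close()


if __name__ == '__main__':
    # dummy credentials, purely for illustration
    nova = nova_client.Client('user', 'password', 'tenant',
                              'http://keystone:5000/v2.0')
    for sample in poll_instances(nova):
        print(sample)
```

Per dhellmann's suggestion above, a first version could skip the cache entirely and ask nova on every polling cycle, adding the UUID-based optimization as a separate step later.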
15:17:04 <nijaba> #topic yjiang5 to start a thread on transformer
15:17:17 * nijaba thanks eglynn warmly too
15:17:28 <yjiang5_away> nijaba: yes, sent mail to the ML for discussion
15:17:48 <nijaba> any conclusion to report?
15:18:12 <jd__> I think we now have a good overview of what we want
15:18:13 <yjiang5_away> no objection to the transformer method, only concern is dhellmann thinks it's too flexible
15:18:28 <jd__> yeah, how "too flexible" can be a concern, really, dhellmann ;)
15:18:46 <nijaba> dhellmann: really ? ;)
15:18:51 <dhellmann> I'm still a little worried about the notion of chaining transformers, and the complexity of configuring that. But I'm content to wait for a patch.
15:19:17 <yjiang5_away> I plan to use this method to finish the CW publisher and then send it out for discussion. At that time, we will have a solid base patch for future discussion.
15:19:36 <jd__> works for me
15:19:39 <dhellmann> the more complicated it is, the more likely someone will have trouble setting it up
15:19:40 <eglynn> yjiang5_away: cool
15:19:49 <dhellmann> yjiang5_away: excellent
15:20:02 <yjiang5_away> dhellmann: yes, so I think mostly they should use default one.
15:20:17 <nijaba> yjiang5_away: action?
15:20:24 <dhellmann> yjiang5_away: right. but then what's the point of building a configuration system? :-)
15:20:55 <yjiang5_away> dhellmann: some advanced users, or someone who may want to add special metrics themselves through configuration
15:20:58 <jd__> dhellmann: evolution! :)
15:21:32 <yjiang5_away> nijaba: yes, my action to send patches for transformer
15:21:50 <dhellmann> yjiang5_away: OK. I just prefer to add features one at a time. Multiple publishers first, then deployer configuration for them.
15:21:51 <nijaba> #action yjiang5_away to send patches for transformer
15:21:51 <eglynn> so in terms of configuration, should be mostly pre-canned right?
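A rough sketch of the transformer-chaining idea under discussion: each counter is pushed through a chain of transformers and then handed to one or more publishers, with most deployments expected to rely on a pre-canned default chain. The class and method names below are assumptions for illustration only, not yjiang5's pending patch.

```python
# Illustrative sketch only: the transformer/publisher chaining concept
# being discussed.  Names and interfaces are assumptions, not the real patch.


class ScalingTransformer(object):
    """Example transformer: scale a counter value (e.g. bytes -> MB)."""

    def __init__(self, factor):
        self.factor = factor

    def handle_sample(self, counter):
        name, volume = counter
        return (name, volume * self.factor)


class Pipeline(object):
    """Push each counter through a chain of transformers, then publish."""

    def __init__(self, transformers, publishers):
        self.transformers = transformers
        self.publishers = publishers

    def publish(self, counter):
        for transformer in self.transformers:
            counter = transformer.handle_sample(counter)
            if counter is None:  # a transformer may drop the sample
                return
        for publisher in self.publishers:
            publisher(counter)


# usage: a default deployment would ship a pre-canned pipeline like this
pipeline = Pipeline(
    transformers=[ScalingTransformer(1.0 / (1024 * 1024))],
    publishers=[lambda c: print('publish', c)],
)
pipeline.publish(('network.incoming.bytes', 10485760))
```

This is where dhellmann's concern about flexibility bites: every extra transformer in the chain is one more thing a deployer has to configure correctly, hence the preference for sensible defaults.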
15:21:55 <nijaba> #topic eglynn to report on synaps' decision
15:22:02 <eglynn> the synaps guys have come to a conclusion
15:22:12 <eglynn> and they want to come on board
15:22:16 <eglynn> w00t!
15:22:19 <nijaba> \o/
15:22:25 <dhellmann> +1
15:22:29 <shardy> +1 :)
15:22:40 <eglynn> but have limited bandwidth initially (other internal demands on their time)
15:22:40 <jd__> great :)
15:22:52 <dhellmann> don't we all
15:22:52 <eglynn> so I'm proposing to have the conversation now on the upstream ML
15:23:02 <nijaba> that's fine.  Let's make sure we help them feel "at home"
15:23:04 <eglynn> (re-)invite them on board and discuss the mechanics of bringing their code under the ceilo umbrella
15:23:10 <yjiang5_away> dhellmann: yes, I'm now working on multiple publisher
15:23:39 <eglynn> so we'll have a good starting point I think for a standalone monitoring solution
15:23:58 <eglynn> I have a laundry list of initial tasks that I'll translate to blueprints
15:24:34 <eglynn> #action eglynn kick off Synaps discussion on upstream ML
15:24:44 <nijaba> #topic nijaba to get rid of roadmap page content to point to lp
15:24:44 <nijaba> This was done, see:
15:24:44 <nijaba> #link http://wiki.openstack.org/EfficientMetering/RoadMap
15:25:06 <nijaba> #topic dhellmann test more lenient anyjson support for folsom and grizzly compatibility
15:25:14 <nijaba> I think this was done and merged
15:25:36 <dhellmann> yes, that's in now
15:25:36 <nijaba> dhellmann: correct?
15:25:40 <dhellmann> or at least up for review
15:25:46 <nijaba> perfect, thanks!
15:25:52 <nijaba> #topic dhellmann update user-api blueprint
15:25:55 <dhellmann> just checked and it has merged
15:26:18 <dhellmann> I did update that blueprint with a link to a wiki page where I described the work a bit
15:26:37 <nijaba> dhellmann: http://wiki.openstack.org/spec-ceilometer-user-api?
15:26:49 <dhellmann> yes
15:26:57 <nijaba> #link http://wiki.openstack.org/spec-ceilometer-user-api
15:27:32 <nijaba> any action, apart from coding, left?
15:27:40 <dhellmann> no, I don't think so
15:27:47 <dhellmann> I'm waiting to start coding until my WSME conversion is completed
15:27:53 <nijaba> thanks!
15:27:55 <dhellmann> baby steps...
15:27:56 <nijaba> #topic dhellmann update pecan port blueprint
15:28:06 <dhellmann> I've updated the blueprint with some details
15:28:09 <nijaba> dhellmann: you had a lot of action!
15:28:23 <dhellmann> I have the port to pecan complete, but ran into an issue with WSME so haven't finished that part yet
15:28:41 <dhellmann> I hope to have that done before I go on vacation next weekend
15:28:41 <nijaba> #link http://wiki.openstack.org/spec-ceilometer-api-server-pecan-wsme
15:29:07 <dhellmann> then we can look at asalkeld's ideas for changes to the API and terminology (most of which I really like)
15:29:22 <nijaba> sounds good
15:29:37 <eglynn> cool
15:29:43 <jd__> "action week"
15:29:50 <nijaba> #topic zykes to report status on Bufunfa / BillingStack port
15:29:55 * dhellmann is cramming work in before going out for 2 weeks
15:30:10 <eglynn> zykes- there?
15:30:27 * nijaba thinks dhellmann won't be able to do any work after having dinner with jd and him
15:30:45 <nijaba> does not look like he is around, let's move on
15:30:49 * dhellmann wasn't planning on it
15:30:57 <nijaba> #topic Bug squashing day
15:30:57 <nijaba> It was proposed to organize a bug squashing day, or bugfest...
15:31:12 <nijaba> so, when should this be?
15:31:28 <eglynn> what's our bug queue length like?
15:31:28 <nijaba> just before g2 release?
15:31:42 <dhellmann> just before g2, during g3, both?
15:31:45 <eglynn> it's good to have a pre-prepared list of low hanging fruit
15:32:04 <eglynn> (for noobs who happen upon the bug squashing day ...)
15:32:09 <nijaba> eglynn: 64 open bugs of all sorts
15:32:17 <jd__> nijaba: :))
15:32:43 <eglynn> so we should triage this queue and try to identify some nice self-contained fixes
15:32:59 <jd__> a first one I think before g2 would be good idea
15:33:20 <eglynn> (and resist the temptation to fix them ourselves!)
15:33:27 <eglynn> jd__ agree
15:33:56 <nijaba> eglynn: only 9 of them need triaging (new)
15:34:11 <eglynn> cool
15:34:20 <nijaba> 30 are either new, confirmed or triaged, quite a few of them wishlist
15:34:58 <eglynn> so at least two things we can achieve with such a bug day: 1. drive quality 2. find some new active contributors
15:35:09 <eglynn> for #2, wishlist low-priority bugs are fine
15:35:13 <dhellmann> 12 are effort-s, too
15:35:37 <nijaba> ok, so g2 is jan 10.  bug day on jan 7th (monday)?
15:35:55 <eglynn> hmmm, that seems a bit tight
15:36:08 <eglynn> maybe late on the week before?
15:36:18 <dhellmann> 3rd or 4th?
15:36:18 <nijaba> eglynn: I am afraid of hangover or vacations if we do that the week before
15:36:26 <eglynn> (otherwise we may be getting jumpy about regressions)
15:36:40 <eglynn> nijaba: a-ha, Ok, forgot about that ;)
15:37:03 <jd__> oh I think 3rd or 4th should be almost hangover free
15:37:05 <jd__> not sure about vacations though
15:37:38 <nijaba> ok, let's vote then ;)
15:37:40 <jd__> anyway, both are fine for me
15:37:41 <eglynn> so if we go with the 7th, some of the bug day fixes will prolly miss g2
15:37:49 <eglynn> (but maybe that's OK)
15:38:06 <eglynn> (g3 will be along soon enough...)
15:38:09 <yjiang5_away> eglynn: agree
15:38:13 <nijaba> #startvote when to do bugday? jan4, jan7
15:38:14 <openstack> Begin voting on: when to do bugday? Valid vote options are jan4, jan7.
15:38:15 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
15:38:25 <nijaba> #vote jan7
15:38:30 <yjiang5_away> #vote jan7
15:38:37 <eglynn> #vote jan4
15:38:38 <n0ano> #vote jan4
15:38:39 <dhellmann> #vote jan4
15:38:53 <jd__> #vote jan4
15:38:56 <nijaba> 20 sec countdown
15:39:09 <nijaba> #endvote
15:39:10 <openstack> Voted on "when to do bugday?" Results are
15:39:11 <openstack> jan4 (4): jd__, n0ano, dhellmann, eglynn
15:39:12 <openstack> jan7 (2): yjiang5_away, nijaba
15:39:28 <jd__> eat that January 7th!
15:39:30 <nijaba> #agreed bugday on jan 4th
15:39:42 * n0ano we have a mandate :-)
15:40:05 <nijaba> ok, I guess we'll define the date for g3 later?
15:40:14 <nijaba> based on our experience?
15:40:27 <eglynn> yep makes sense
15:40:31 <dhellmann> nijaba: +1
15:40:40 <dhellmann> maybe not so close to the deadline next time :-)
15:40:42 <nijaba> let's move on then
15:40:50 <nijaba> #topic Review blueprints and progress
15:41:16 <nijaba> so, I think we should define a close date for blueprint validation
15:41:22 <nijaba> g2 and g3 are coming up fast
15:41:43 <nijaba> so I think if we have not agreed on a bp before dec 15th, we should probably can it....
15:41:58 <dhellmann> that seems reasonable
15:42:05 <dhellmann> what's the process for "agreeing"?
15:42:20 <dhellmann> discussion on the list?
15:42:25 <jd__> I guess
15:42:27 <eglynn> that's a deadline for g2 only? (or for grizzly in general?)
15:42:37 <nijaba> we don't have a formal agreement process, but consensus on the features and how to achieve them is what we have done before
15:42:46 <nijaba> eglynn: in general
15:42:49 <eglynn> k
15:43:03 <dhellmann> ok, I just want to make sure I get the agreements I need
15:44:07 <nijaba> #startvote agree on dec 15th for blueprint "freeze"? yes, no, abstain
15:44:08 <openstack> Begin voting on: agree on dec 15th for blueprint "freeze"? Valid vote options are yes, no, abstain.
15:44:09 <openstack> Vote using '#vote OPTION'. Only your last vote counts.
15:44:16 <nijaba> #vote yes
15:44:25 <dhellmann> #vote yes
15:44:31 <eglynn> #vote yes
15:44:32 <jd__> #vote yes
15:44:32 <n0ano> #vote yes
15:44:35 <yjiang5_away> #vote yes
15:44:55 <nijaba> #endvote
15:44:56 <openstack> Voted on "agree on dec 15th for blueprint "freeze"?" Results are
15:44:57 <openstack> yes (6): n0ano, jd__, nijaba, eglynn, yjiang5_away, dhellmann
15:45:09 <nijaba> #agreed blueprint freeze on dec 15th
15:45:24 <nijaba> #topic Discuss suggested re-narrowing of project scope for grizzly versus user-oriented monitoring
15:45:45 <eglynn> so this topic was motivated by markmc's observations on the ML: http://lists.openstack.org/pipermail/openstack-dev/2012-November/003348.html
15:45:50 <nijaba> I think this was proposed by someone outside of the project
15:45:59 <nijaba> I am currently not too afraid of this
15:46:06 <nijaba> what do you guys think?
15:46:17 <nijaba> especially now that we have a bp freeze
15:46:53 <dhellmann> I think some of this was brought about by the "thrashing" of the discussion of how to handle monitoring
15:47:09 <eglynn> well Synaps will give us a (user-oriented) monitoring service, and multi-publish will give the ability to push metrics into that
15:47:20 <dhellmann> yes, so now we have a more defined direction
15:47:42 <eglynn> but I agree that extending ourselves to also cover system oriented monitoring / instrumentation now would be a bridge too far
15:47:51 <eglynn> (for grizzly anyway)
15:48:08 <nijaba> eglynn: on this I tend to agree, but we don't even have the start of a bp for it
15:48:14 <eglynn> by system oriented monitoring I mean cloud-operator-focussed
15:48:15 <dhellmann> yes, I think we need to start differentiating between short term and long term goals for these new features
15:48:20 <nijaba> so I was seeing it as discussions for H
15:48:23 <n0ano> I'm more interested in system monitoring but I'm more than willing to address that post-grizzly
15:48:40 <eglynn> cool
15:48:42 <nijaba> but we should definitily continue the discussion
15:48:50 <dhellmann> agreed
15:48:54 <eglynn> yep
15:49:04 <nijaba> ok, cool!!!
15:49:09 <n0ano> discussion - yes, actively push code upstream - not just yet
15:49:09 <jd__> agreed
15:49:20 <nijaba> #topic multi-publisher blueprint
15:49:26 <jd__> until there's any code, that's just kind of useless discussion anyway :)
15:49:44 <yjiang5_away> I'm working on the patch now
15:49:48 <nijaba> I think we are getting close to having an agreement on this one, right?
15:49:56 <eglynn> cool
15:49:58 <yjiang5_away> nijaba: yes
15:49:59 <jd__> yeah, yjiang5_away is doing the grunt work :)
15:50:06 <nijaba> thanks yjiang5_away
15:50:14 <yjiang5_away> :)
15:50:17 <eglynn> seconded!
15:50:30 <nijaba> #topic multi-dimensions bp
15:50:43 <dhellmann> link?
15:50:46 <nijaba> so this is a new one that jd and I came up with
15:51:08 <dhellmann> #link http://wiki.openstack.org/Ceilometer/blueprints/multi-dimensions
15:51:10 <nijaba> #link https://blueprints.launchpad.net/ceilometer/+spec/multi-dimensions
15:51:15 <nijaba> hehe
15:51:54 <nijaba> so the question left on this one is whether we need to express complex dimensions with & and | or not
15:52:11 <nijaba> I think we could propose only & for a first version
15:52:22 <jd__> I think too
15:52:24 <eglynn> agreed
15:52:26 <dhellmann> nijaba: +1
15:52:27 <nijaba> and extend it later if needed
15:52:36 <jd__> or can be achieved with multiple requests easily for now
15:52:43 <dhellmann> I don't care for the syntax described, but the intent is good
15:53:12 <nijaba> #action nijaba to update the bp to specify complex request in a future version
15:53:22 <nijaba> dhellmann: feel free to prppose a syntax :)
15:53:43 <dhellmann> nijaba: something more closely resembling a regular GET request would be easier to handle
15:54:12 <nijaba> dhellmann: that reminds me of what jd was suggesting
15:54:31 <nijaba> &key=string&key=string....
15:55:06 <dhellmann> nijaba: right. let me work up an example using a syntax WSME supports :-)
15:55:23 <nijaba> ok, I'll wait for your example then. thanks
15:55:29 <dhellmann> the controller method would get an array of dimension objects as an argument if we do it right
15:55:38 <nijaba> perfect
15:55:41 <nijaba> that was the idea
15:56:13 <dhellmann> #action dhellmann to prepare example of dimension query syntax that will work with WSME
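To make the AND-only first version concrete, here is a hypothetical example of the &key=value style jd and nijaba mention above, with a small parser that collapses the repeated parameters into a dimension filter dict (implicitly ANDed). The parameter name and URL layout are invented for illustration; dhellmann's WSME-friendly syntax proposal is still to come.

```python
# Illustrative sketch only: the AND-only dimension filter syntax discussed
# above, expressed as repeated key=value pairs on a plain GET request.
# The 'dimension' parameter name and the URL layout are assumptions.
from urllib.parse import urlparse, parse_qsl

EXAMPLE = ('http://ceilometer/v1/meters/instance'
           '?dimension=resource_id:abc-123'
           '&dimension=project_id:demo')


def parse_dimensions(url):
    """Return a dict of dimension filters, combined implicitly with AND."""
    filters = {}
    for key, value in parse_qsl(urlparse(url).query):
        if key == 'dimension' and ':' in value:
            name, _, wanted = value.partition(':')
            filters[name] = wanted
    return filters


print(parse_dimensions(EXAMPLE))
# {'resource_id': 'abc-123', 'project_id': 'demo'}
```

With a syntax along these lines, a controller method could receive the filters as an array of dimension objects, which matches the intent dhellmann describes just above; OR queries would be handled client-side with multiple requests for now.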
15:56:55 <nijaba> #topic Discuss adopting asalkeld's client implementation officially
15:57:12 <nijaba> I am actually all for this proposal
15:57:16 <nijaba> anything blocking?
15:57:25 <jd__> what's asalkeld version?
15:57:38 <eglynn> so when zykes- asked about CLIs a few days ago, I was surprised there were so many
15:57:38 * dhellmann looks for link
15:57:40 <jd__> I thought dhellmann had one under his foot
15:57:49 <dhellmann> #link https://github.com/asalkeld/python-ceilometerclient
15:57:56 <dhellmann> jd__: ours is much more primitive
15:58:08 <dhellmann> asalkeld used the "standard" client model
15:58:18 <dhellmann> much more complete
15:58:39 <dhellmann> we really need a client lib
15:58:39 <jd__> ok :)
15:58:40 <eglynn> sounds like the one to row in behind
15:58:48 <nijaba> anyone against?
15:59:01 <eglynn> nope
15:59:05 <jd__> didn't look at the code, but I'm probably not against anyway
15:59:06 * nijaba needs to run to deliver a speech, sorry to be in a hurry
15:59:10 <dhellmann> I'm obviously in favor
15:59:13 <jd__> can we put that into openstack core/incubated?
15:59:33 <dhellmann> that's the idea, to have another git repo setup for it with CI integration, etc.
15:59:34 <yjiang5_away> jd__: normally the client is a separate project?
15:59:39 <jd__> yjiang5_away: yes
15:59:52 <nijaba> #agreed adopt asalkeld's client officially
15:59:55 <jd__> dhellmann: perfect, what do we have to do for that?
16:00:03 <dhellmann> jd__: dunno
16:00:06 <eglynn> yjiang5: separate repo, but same overall project?
16:00:11 <dhellmann> talk to the infra team, I guess
16:00:15 <jd__> #action move asalkeld's client into openstack incubation/core
16:00:23 <nijaba> do you guys mind to continue the discussion on our chan?  I need to run :/
16:00:26 <dhellmann> that needs an owner
16:00:31 <jd__> #action jd__ move asalkeld's client into openstack incubation/core
16:00:31 <jd__> nijaba: #chair me I'll finish
16:00:32 <dhellmann> nijaba: I think we're done
16:00:40 <nijaba> #chair jd__
16:00:41 <dhellmann> or maybe not
16:00:41 <openstack> Current chairs: jd__ nijaba
16:00:54 <jd__> run nijaba! run!
16:00:57 <nijaba> jd__: thanks, did not know I could do that during the meeting
16:01:04 * nijaba waves and run
16:01:26 * eglynn wonders if we've any sharable presentation collateral?
16:01:55 <eglynn> (giving a talk to an Irish openstack user group next week on ceilo/monitoring etc.)
16:02:10 <jd__> #topic open discussion
16:02:14 <eglynn> http://www.meetup.com/OpenStack-Ireland/events/92413652/
16:02:17 <yjiang5_away> eglynn: I remember jd__ sent some in IRC before
16:02:29 <jd__> eglynn: I've the ones we used with nijaba a few weeks ago
16:02:49 <jd__> https://docs.google.com/presentation/d/1i30roVZp00Wvo46F4k5CT98sw2uMgaf5Lh3bSfiQ-Cg/edit#slide=id.p
16:02:50 <eglynn> jd__, yjiang5: cool
16:03:18 <eglynn> thanks, I may borrow liberally ;)
16:03:37 <yjiang5_away> jd__: can we put into ceilometer launchpad?
16:04:02 <dhellmann> yjiang5_away: good idea
16:04:30 <jd__> sure
16:04:51 <yjiang5_away> jd__: thanks.
16:05:09 <dhellmann> yjiang5_away: even better, let's add a link to our docs
16:05:38 <yjiang5_away> dhellmann: yep, and eglynn can add his new one after finishing the talk :)
16:05:39 <dhellmann> speaking of docs, the jenkins job to build them is now working
16:05:51 <eglynn> yjiang5: yep, will do!
16:05:54 <dhellmann> #link http://docs.openstack.org/developer/ceilometer/
16:06:04 <dhellmann> I opened a bug to fix the styling
16:06:37 <jd__> awesome
16:06:46 <dhellmann> does someone have time to talk to the doc team about whether something will link to us?
16:07:23 <dhellmann> I don't know what we have to do to get a link on the project list, for example
16:07:31 <jd__> bribe someone I guess
16:07:53 <jd__> somebody for some #action ?
16:08:15 <yjiang5_away> dhellmann: So basically contact the doc team?
16:08:41 <dhellmann> yjiang5_away: yes, we need someone to email them and ask what to do next
16:08:54 <yjiang5_away> dhellmann: I'm glad to help
16:09:19 <dhellmann> great! want to take an #action item?
16:09:29 <yjiang5_away> dhellmann: yes, but I don't know how to #action yet :$
16:09:39 <dhellmann> just type it out :-)
16:09:54 <eglynn> yjiang5_away: #action yjiang description
16:09:59 <dhellmann> yep
16:10:09 <yjiang5_away> #action talk with doc team to link to ceilometer docs
16:10:36 <jd__> you forgot your nickname after #action I think :)
16:10:38 <yjiang5_away> #action yjiang5 talk with doc team to link to ceilometer docs
16:10:43 <dhellmann> +1
16:10:49 <jd__> that one seems right!
16:10:56 <jd__> anything else for today?
16:11:08 <yjiang5_away> :)
16:11:39 <jd__> last call!
16:11:40 <eglynn> and that's a wrap I think ...
16:11:45 <dhellmann> I'm done
16:11:49 <jd__> #endmeeting