21:00:11 <eglynn> #startmeeting ceilometer
21:00:12 <openstack> Meeting started Wed Nov 20 21:00:11 2013 UTC and is due to finish in 60 minutes.  The chair is eglynn. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00:13 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00:16 <openstack> The meeting name has been set to 'ceilometer'
21:00:28 <lsmola_> hello
21:00:28 <eglynn> hey everybody, show of hands?
21:00:39 <eglynn> jd__ is indisposed this week, so I'll fight the ircbot again ...
21:01:22 <eglynn> tumbleweeds ... ;)
21:01:26 <lsmola_> o/
21:01:29 <gordc> o/
21:01:35 <lsmola_> my hand is here
21:01:42 <lsmola_> :-)
21:02:05 <eglynn> fairly thin attendance by the looks of it ...
21:02:13 <dhellmann> o/
21:02:16 <eglynn> I'll put it down to post-HK exhaustion ;)
21:02:16 <terriyu> o/
21:02:29 <eglynn> hey y'all
21:02:39 <eglynn> k, lets roll ...
21:02:41 <eglynn> #topic actions from last week ...
21:02:50 <eglynn> jd__ was to propose a patch on the governance gerrit for our new OpenStack Telemetry moniker
21:02:52 * dhellmann has a flakey irc connection today, so messages are coming in bunches after a delay
21:02:56 <eglynn> that was done, here's the review ...
21:03:07 <eglynn> #link https://review.openstack.org/56402
21:03:14 <eglynn> proving fairly un-controversial so far
21:03:24 <eglynn> other than the TeleWTF? jibe from markmc ;)
21:03:34 <eglynn> should land soon enough methinks
21:03:48 <eglynn> anyone with buyer's remorse on the name choice?
21:04:08 <gordc> that's what google is for... to answer 'wtf' part
21:04:14 <ildikov> hi all
21:04:23 <eglynn> gordc: yep
21:04:43 <dhellmann> I like the new name, nice work eglynn
21:04:52 <eglynn> cool :)
21:05:07 <eglynn> #topic status on icehouse-1
21:05:15 <eglynn> it's looking pretty thin
21:05:21 <eglynn> #link https://launchpad.net/ceilometer/+milestone/icehouse-1
21:05:30 <eglynn> I guess because of the shorter than usual summit-->milestone-1 lead in
21:05:48 <eglynn> I think that's OK
21:05:54 <eglynn> ... probably a pattern reflected somewhat in the other projects too
21:06:13 <eglynn> release mgr's thought on this at the project meeting last night:
21:06:19 <dhellmann> yeah, ttx said the -1 milestone was set to basically include stuff that was in process already at the summit
21:06:24 <eglynn> using icehouse-1 to clean up the slate and prepare for the real work is fine
21:06:43 * eglynn paraphrasing ^^^
21:06:45 <nealph> so really, a thin -1 release is a good thing?
21:06:48 <nealph> :)
21:06:50 <dhellmann> yes
21:07:18 <eglynn> nealph: well it's also a busy time in the cycle for those with distros to get out the door
21:07:34 <eglynn> so overall understandable
21:07:50 <eglynn> but means we'll be pretty backloaded for i2 & i3
21:08:29 <eglynn> does anyone have anything up their sleeve they'd like to target at i-1?
21:08:49 <eglynn> realistically speaking, it would have to be more a bug fix than a BP at this late stage
21:09:14 <eglynn> nope?
21:09:24 <eglynn> k
21:09:26 <nealph> I think john (herndon)  is looking at the event
21:09:27 <dhellmann> were any features implemented after the havana cut-off without a blueprint?
21:09:35 <nealph> storage updates.
21:09:37 <dhellmann> those would be candidates to be added for i-1
21:09:56 <eglynn> dhellmann: none that I can think of
21:10:06 <dhellmann> eglynn: I didn't think so, either
21:10:35 <eglynn> nealph: those storage updates == DB migrations?
21:10:38 <eglynn> (for events)
21:11:06 <nealph> no...one sec...digging for BP.
21:12:34 <nealph> https://blueprints.launchpad.net/openstack/?searchtext=specify-event-api
21:13:10 <eglynn> so it would really need to be well progressed implementation-wise (approaching code-complete) by EoW to have a realistic chance of landing for i-1
21:13:20 <eglynn> (given the review lag etc.)
21:13:59 <nealph> noted. I'll leave it to him to comment further.
21:14:16 * eglynn notices status is blocked
21:15:27 <eglynn> k, let's punt on the specify-event-api until herndon and jd__ are around to discuss
21:15:30 <herndon_> was blocked waiting on alembic vs. sqlalchemy
21:15:41 <herndon_> sorry, I showed up late
21:15:49 <eglynn> herndon_: np! thanks for the clarification
21:16:01 <herndon_> making good progress now, 4 reviews up.
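For context on the alembic vs. sqlalchemy-migrate question mentioned above, a minimal alembic migration looks roughly like the sketch below; the table and column names are invented for illustration and are not what the event-storage reviews actually add.

    # Minimal alembic migration sketch (illustrative only; the table and
    # column names are assumptions, not what the event-storage reviews add).
    from alembic import op
    import sqlalchemy as sa


    def upgrade():
        op.create_table(
            'event_example',
            sa.Column('id', sa.Integer, primary_key=True),
            sa.Column('message_id', sa.String(50), nullable=False),
            sa.Column('generated', sa.DateTime, index=True),
        )


    def downgrade():
        op.drop_table('event_example')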
21:16:08 <eglynn> herndon_: so, were you eyeing up icehouse-1 for this?
21:16:26 <herndon_> that would be great
21:16:55 <eglynn> herndon_: (for context, I don't think there's any particular pressure to beef up i-1, but if the feature is ready to fly ...)
21:17:34 <herndon_> let's see where we are by the end of this week? there is still work to do on the ceilometerclient bit, I think
21:17:38 <herndon_> and documentation.
21:17:56 <dhellmann> it's better to target something for a later milestone and finish it early than the other way around
21:17:57 <eglynn> herndon_: cool let's see how the reviews progress on gerrit, make a call by EoW
21:18:06 <eglynn> dhellmann: fair point
21:18:24 <eglynn> moving on ...
21:18:25 <eglynn> #topic blueprints for icehouse
21:18:38 <eglynn> we agreed last week to aim for today for BPs to be filed
21:18:58 <eglynn> I've made progress but I'm not thru all mine quite yet
21:19:07 <eglynn> #link https://blueprints.launchpad.net/ceilometer/icehouse
21:19:38 <eglynn> also nprivalova has started on a rough aggregation & roll up BP
21:19:47 <eglynn> #link https://blueprints.launchpad.net/ceilometer/+spec/aggregation-and-rolling-up
21:20:00 <eglynn> (with some discussion still on-going in the corresponding etherpad)
21:20:12 <ildikov> I also have a BP, which I'll finish by the end of this week
21:20:32 <eglynn> just going to say, ildikov is working on locking down the new API query filtering definition
21:20:41 <eglynn> ... and she beat me to it :)
21:20:58 <ildikov> also with some etherpad discussion
21:21:22 <eglynn> ildikov: cool, thanks for that!
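For readers following along, the query-filtering blueprint under discussion is about allowing richer filter expressions than the current simple field comparisons. Below is a rough, purely illustrative sketch of the kind of expression being considered; the operator names and structure are assumptions, not the definition ildikov is drafting.

    # Purely illustrative sketch of a richer API query filter of the kind
    # being discussed -- the operators and field names are assumptions,
    # not the blueprint's final definition.
    query_filter = {
        "and": [
            {"=": {"resource_id": "bd9431c1-8d69-4ad3-803a-8d4a6b89fd36"}},
            {">": {"counter_volume": 10}},
            {"or": [
                {"=": {"project_id": "project-a"}},
                {"=": {"project_id": "project-b"}},
            ]},
        ],
    }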
21:21:35 <eglynn> for everyone else with an idea discussed at summit that they want to target for icehouse ...
21:21:53 <eglynn> prolly a good idea to try to get something rough written down in a BP sooner rather than later
21:22:07 <eglynn> before the memory of hong kong fades too much into the mists of beer ...
21:22:19 <eglynn> sorry, the mists of time ;)
21:22:37 <nealph> the mists of jet lag.
21:22:39 <lsmola_> hehe
21:22:44 <eglynn> LOL :)
21:23:05 <ildikov> :D
21:23:17 <eglynn> once BPs all filed, I'd expect jd__ will do a first approval pass
21:23:35 <eglynn> then we'll have to start thinking about targeting i2 or i3
21:24:01 <eglynn> we were heavily back-loaded on h3 in the last cycle
21:24:09 <eglynn> ... mostly down to me ;)
21:24:32 <eglynn> so would be good to have a real chunky i2 milestone this time round
21:24:49 <eglynn> anything else on BPs?
21:25:04 <eglynn> k, moving on ...
21:25:15 <eglynn> #topic tempest / integration tests discussion
21:25:28 <eglynn> just following up on last week's discussion
21:25:38 <eglynn> I put some initial thoughts in that etherpad started by nprivalova
21:25:48 <eglynn> #link https://etherpad.openstack.org/p/ceilometer-test-plan
21:26:02 <eglynn> would be great if folks who are interested in contributing to integration testing ...
21:26:20 <eglynn> could stick in a brief description of the areas they're planning to concentrate on
21:26:46 <eglynn> just real rough ideas would be fine
21:26:55 <eglynn> doesn't need to be poetry ;)
21:27:13 <eglynn> just so long as we don't end up duplicating efforts
21:29:30 <eglynn> anyone got anything on tempest they want to raise?
21:30:05 <gordc> i'll just ask since i can't see tempest branch
21:30:20 <eglynn> gordc: which tempest branch?
21:30:51 <gordc> not branch... code i guess... is there a specific folder in tempest which ceilometer tests sit in? just curious so i know what to track in tempest.
21:31:09 * gordc hasn't touched tempest in a very long time.
21:31:36 <eglynn> gordc: so there's nothing landed in tempest yet AFAIK, but there were some reviews outstanding
21:31:52 * eglynn doesn't have the links to hand ...
21:32:11 <gordc> ok, i'll dig around. i'll need to look at code eventually anyways.
21:32:23 <dperaza> I have another question
21:32:26 <eglynn> #link https://review.openstack.org/43481
21:32:42 <eglynn> #link https://review.openstack.org/39237
21:32:50 <gordc> eglynn: cool cool. thanks.
21:32:55 <eglynn> dperaza: shoot
21:32:56 <dperaza> are there any plans to run performance specific tests with ceilometer
21:33:19 <nealph> I think this has been raised before...
21:33:31 <dperaza> I was just looking at https://etherpad.openstack.org/p/icehouse-summit-ceilometer-big-data
21:33:31 <nealph> the variations between env are waaay too many.
21:33:38 <litong> @dperaza, I was planning to run some tests before the summit, then I was pulled to other things.
21:33:47 <dperaza> did not see tempest action items
21:33:49 <eglynn> dperaza: this will generally happen as part of downstream QE I think
21:33:54 <litong> @dperaza, I think I can do some of these in the next few weeks.
21:34:22 <eglynn> dperaza: tempest is more suited to a binary gate as opposed to shades-of-grey performance testsuite
21:34:28 <eglynn> (IIUC)
21:34:29 <dperaza> litong is that something that could potentially live in tempest
21:34:39 <dperaza> or are you thinking another home
21:34:48 <litong> @dperaza, I think that is different,
21:34:54 <litong> you were asking performance tests.
21:35:00 <dperaza> right
21:35:18 <dperaza> I do see https://github.com/openstack/tempest/tree/master/tempest/stress
21:35:35 <ildikov> if I remember well, in HK we discussed separating it from tempest, I mean the performance tests
21:35:52 <litong> it should, because they serve different purposes.
21:35:53 <dperaza> right if we come up with performance buckets where would they live
21:36:33 <dperaza> are we saying performance would not vote at the gate then
21:36:38 <gordc> ildikov: right. performance tests wouldn't work with jenkins... its performance fluctuates way too much.
21:36:46 <dperaza> at least initially?
21:36:56 <nealph> dperaza, ildikov: can you help me understand the goal for performance testing?
21:37:31 <eglynn> so tempest seems to support an @stresstest decorator
21:37:41 <eglynn> #link https://github.com/openstack/tempest/commit/31fe4838
21:37:44 <dperaza> first establish a benchmark and then use as a reference to see how you are improving or regressing
21:37:59 <eglynn> but that seems to be just a discovery mechanism
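To illustrate eglynn's point that a decorator like @stresstest can act purely as a discovery mechanism, here is a generic sketch: the decorator only tags the test function, and a separate runner collects the tagged tests. The attribute name is an assumption and this is not tempest's actual implementation.

    # Generic sketch of a marker decorator used only for test discovery;
    # the attribute name is an assumption, not tempest's implementation.
    def stresstest(func):
        func.is_stress_test = True
        return func


    @stresstest
    def test_alarm_evaluation_under_load():
        pass


    # A separate runner would then collect only the tagged tests:
    stress_tests = [t for t in (test_alarm_evaluation_under_load,)
                    if getattr(t, 'is_stress_test', False)]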
21:38:07 <ildikov> nealph: in my opinion performance test helps to understand the behavior of the system under different load
21:38:07 <dperaza> interesting eglynn
21:38:28 <eglynn> dperaza: I don't think we could realistically gate on such performance tests
21:38:41 <dperaza> we can start by stressing a single ceilometer node
21:38:44 <eglynn> e.g. reject a patch that slowed things down by some threshold
21:38:48 <dperaza> and see where it breaks
21:38:51 <litong> @eglynn, agreed.
21:39:04 <eglynn> too much variability in the load on the jenkins slaves
21:39:14 <litong> we can run these performance test aside to help find performance problems, then open bugs and fix them.
21:39:22 <eglynn> that's not to say performance testing isn't good
21:39:29 <eglynn> just that we can't gate on it IMO
21:39:41 <eglynn> so tempest isn't necessarily the correct home for such tests
21:39:52 * nealph nods
21:39:53 <dperaza> I do think we can have it as a non-voting job in jenkins
21:40:21 <dperaza> that runs daily for example as opposed to on every commit for gating purposes
21:40:55 <gordc> dperaza: given how unstable jenkins is, i'm guessing you'll get a lot of false positives... or a really bad starting baseline.
21:40:56 <eglynn> dperaza: sure, something like the bitrot periodic jobs that run on stable
21:41:06 <DanD> In the testing related session at the summit, didn't someone say there were plans for separate performance testing systems?
21:41:18 <eglynn> dperaza: but again it would be purely advisory
21:41:33 <eglynn> dperaza: (i.e a hint to get in and start profiling recent changes ...)
21:41:37 <dperaza> DanD: that will work too
21:42:01 <eglynn> DanD: I don't recall hearing that, can you remember the session title?
21:42:06 <gordc> DanD: that'd be interesting.
21:42:33 <dperaza> eglynn: right something to at least tell you the changes that degraded the performance in a period of a few days
21:43:17 <DanD> will have to go back and look at the sessions, but there was one focused on testing. and I thought someone from the test group had said essentially that performance tests would not work as part of the gate tests but that they were working on something separate for this
21:44:05 <eglynn> dperaza: k, in principle it would be good stuff but (a) probably best separated off from Tempest and (b) simple integration tests are our first priority now
21:44:08 <litong> @DanD, @eglynn, I think that the problem is that performance tests normally take a long time and we do not want the gating job to take a long time.
21:44:24 <ildikov> DanD: I meant this discussion, when I mentioned the separation thing, it was on one of the last sessions I think
21:44:25 <dperaza> Performance testing is usually also done on release or milestone boundaries
21:44:25 <litong> I hope that the gating job returns in a few seconds.
21:44:27 <eglynn> (seeing as that's where we have the most pressing/basic gap in our coverage)
21:45:09 <eglynn> litong: agreed
21:45:15 <dperaza> I think I got the answer to my question, thanks guys
21:45:18 <gordc> litong: we've been waiting 24hrs+ for some of our patches... a few seconds is a dream. lol
21:46:06 <litong> @gordc, yeah, my dream.
21:46:17 <litong> the point is that we do not include the performance test in gating jobs
21:46:37 <gordc> litong: :)
21:46:41 <eglynn> litong: yes, we're ad idem on that point
21:46:57 <eglynn> k, moving on ...
21:47:26 <eglynn> #topic release python-ceilometerclient
21:47:36 <eglynn> I've got a couple of bug fixes in progress
21:47:45 <eglynn> most importantly https://bugs.launchpad.net/python-ceilometerclient/+bug/1253057
21:47:47 <uvirtbot> Launchpad bug 1253057 in python-ceilometerclient "alarm update resets repeat_actions attribute, breaking heat autoscaling" [High,In progress]
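Roughly, the client call affected by bug 1253057 looks like the sketch below: updating one alarm attribute should not silently reset repeat_actions. This is a hedged illustration of the python-ceilometerclient interface as I understand it; the credentials and alarm ID are placeholders.

    # Hedged sketch; credentials and the alarm ID are placeholders.
    from ceilometerclient import client

    cc = client.get_client(2,
                           os_username='admin',
                           os_password='secret',
                           os_tenant_name='admin',
                           os_auth_url='http://localhost:5000/v2.0')

    # Before the fix, an update like this could send a payload omitting
    # repeat_actions, so the server reset it to its default.
    cc.alarms.update(alarm_id='ALARM_ID', description='updated by script')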
21:47:57 <eglynn> once these land, I'd like to cut a 1.0.7 release
21:48:07 <eglynn> anyone got anything else that they'd like to get in before I cut that?
21:48:20 <eglynn> herndon_: you mentioned some events support in the client?
21:48:40 <eglynn> herndon_: note that client releases are cheap and easy
21:48:42 * gordc keeps forgetting to look at python-ceilometerclient stream... will take a look.
21:48:46 <herndon_> let's skip this one
21:48:59 <herndon_> that part still has a couple of holes to patch up before it is ready
21:49:12 <eglynn> herndon_: cool, we can easily spin up a 1.0.8 whenever you're good to go with that
21:49:26 <herndon_> ok, I will keep you posted
21:49:44 <eglynn> gordc: if you're in a reviewing mood ... https://review.openstack.org/57423
21:50:30 <gordc> eglynn: will try to give it a look later tonight.
21:50:40 <eglynn> gordc: thank you sir!
21:50:44 <eglynn> k, we're done on the client methinks ...
21:50:53 <eglynn> #topic open discussion
21:51:38 <gordc> i guess i can ask about resource stuff here.
21:51:52 <eglynn> gordc: sorry forgot that
21:52:01 <gordc> np. :)
21:52:02 <sandywalsh> o/
21:52:12 <eglynn> #topic what's a resource?
21:52:16 <gordc> in Samples, user and project attributes are pretty self-explanatory and map to keystone concepts pretty well but what about resource_id...
21:52:21 <eglynn> gordc: the floor is yours!
21:52:53 <gordc> i was reviewing  https://review.openstack.org/#/c/56019 and it seems like we don't really have a consistent use for resource_id...
21:53:04 <gordc> or i may have missed the concept as usual :)
21:53:29 <eglynn> gordc: yeah so I think the original assumption was that every resource would have a UUID associated with it
21:53:47 <eglynn> gordc: i.e. the pattern followed by nova, glance, cinder etc.
21:54:07 <eglynn> gordc: but obviously swift doesn't fit into that neat pattern, right?
21:54:21 <notmyname> ?
21:54:38 <gordc> eglynn: right... there are a few meters where resource_id kind of deviates.
21:54:50 <eglynn> notmyname: context is using the container name as the resource ID for swift metering, IIUC
21:55:07 <eglynn> notmyname: see comments on https://review.openstack.org/56019
21:55:55 <eglynn> gordc: is the issue one of uniqueness?
21:55:58 <dragondm> o/ (blast dst :P )
21:56:17 <eglynn> gordc: i.e. that the container name isn't guaranteed unique, or?
21:56:19 <dolphm> (out of scope) why is ceilometer tracking individual resource id's? that seems unnecessarily granular at first glance
21:56:37 <gordc> eglynn: i guess... or maybe just checking that it isn't written somewhere that resource_id in Sample must be <this>
21:56:43 <eglynn> dolphm: we want to be able to query grouped by resource
21:57:12 <sandywalsh> dolphm, for samples it may not be required, but for events we get a lot of use from per-resource tracking
21:57:30 <eglynn> gordc: e.g. must be a stringified UUID?
21:57:51 <eglynn> gordc: I don't think that's a requirement, more just the convention that most services follow
21:57:52 <notmyname> eglynn: I'm not sure I follow.
21:58:48 <eglynn> notmyname: ceilometer samples include a resource ID that's generally a UUID, whereas a patch for swift metering uses the container name for the samples it emits
21:58:54 <dragondm> AFAIK, resource id just needs to be a globally unique id.
21:59:03 <eglynn> dragondm: agreed
21:59:19 <notmyname> eglynn: the (account, container) pair is unique in a swift cluster (eg AUTH_test/images)
21:59:21 <dragondm> uuid works for that, so there won't be collisions.
21:59:29 <gordc> dragondm: that's my original thought as well.
21:59:31 <eglynn> so I think the question is whether the container name reaches that bar of uniqueness
21:59:51 <notmyname> and within the context of a single swift cluster, that pair is globally unique
22:00:10 <eglynn> seems that (account, container) would be better from what john is saying above
22:00:18 <dragondm> but if you have multiple regions it does not...
22:00:32 <notmyname> multiple clusters, you mean
22:00:37 <gordc> eglynn: something for the mailing list? (given the time)
22:00:48 <eglynn> gordc: yep, good call
22:00:50 <dragondm> notmyname: yes.
22:00:58 <eglynn> gordc: can you raise it on the ML?
22:01:05 <gordc> i'll do that.
22:01:27 <eglynn> #action gordc raise resource ID uniqueness question on ML
22:01:41 <notmyname> dragondm: if ceilometer is metering multiple clusters, then it must have some sort of cluster identifier. seems pretty easy to construct a unique resource_id from that
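One way to picture notmyname's suggestion: derive the resource ID from a cluster identifier plus the (account, container) pair, which is unique within a cluster. The helper below is a hypothetical sketch; the separator and field ordering are assumptions, not an agreed convention.

    # Hypothetical sketch of building a globally unique resource ID for
    # swift metering; the separator and field ordering are assumptions.
    def make_resource_id(cluster_id, account, container):
        return '/'.join([cluster_id, account, container])

    make_resource_id('cluster-east', 'AUTH_test', 'images')
    # -> 'cluster-east/AUTH_test/images'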
22:01:51 <eglynn> k, I think we're over time here
22:01:59 <eglynn> let's drop the open discussion
22:01:59 <sandywalsh> sorry for the late arrival (DST) ... re: integration testing, we've started working on storage driver load testing https://github.com/rackerlabs/ceilometer-load-tests (with a CM fork for test branches)
22:02:11 <eglynn> sandywalsh: cool!
22:02:19 <dperaza> dperaza: :-(
22:02:30 <dragondm> Yup. we are also looking at testing an experimental Riak driver.
22:02:33 <dperaza> I guess I could bring it up in the next meeting
22:02:52 <eglynn> yep, let's defer to next meeting on the ML as we're over time
22:03:01 <dragondm> Cool.
22:03:21 <eglynn> k, thanks for your time folks!
22:03:29 <eglynn> #endmeeting