15:00:19 <jd__> #startmeeting ceilometer
15:00:20 <openstack> Meeting started Thu Sep 19 15:00:19 2013 UTC and is due to finish in 60 minutes.  The chair is jd__. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:22 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:24 <openstack> The meeting name has been set to 'ceilometer'
15:00:34 <gordc1> o/
15:00:37 <jd__> hello everybody
15:00:44 <lexx> o/
15:00:53 <eglynn> o/
15:01:04 <thomasm> o/
15:01:05 <sileht> o/
15:01:25 <lsmola> hello
15:02:08 <sandywalsh> o/
15:02:16 <jd__> #topic Review Havana RC1 milestone
15:02:24 <jd__> #link https://launchpad.net/ceilometer/+milestone/havana-rc1
15:02:59 <jd__> so we've still got blueprints unmerged, that's bad
15:03:08 <dragondm> o/
15:03:16 <jd__> though I think we're coming to an end, fortunately
15:03:16 * eglynn busily working thru' review comments on https://review.openstack.org/44751
15:03:31 <eglynn> ... will have another iteration up shortly
15:03:31 <jd__> the release manager virtual deadline is tomorrow
15:03:53 <jd__> so nose to the grindstone, as eglynn would say
15:04:14 <eglynn> ... grinding like crazy :)
15:04:25 <sileht> same for me :)
15:04:51 <jd__> I also went through our bug list to mark the ones important for RC1
15:05:02 <jd__> so the list on the page should be up to date
15:05:17 <jd__> when reporting new bugs, don't forget to set the rc1 milestone if you want them fixed for rc1
15:05:30 <jd__> and also feel free to review the list yourself if you think I missed something important :)
15:05:48 <jd__> there's also a couple of bugs that you can see at https://launchpad.net/ceilometer/+milestone/havana-rc1 that don't have assignees
15:05:48 <eglynn> I need to chase dprince on https://bugs.launchpad.net/ceilometer/+bug/1224666
15:05:49 <uvirtbot> Launchpad bug 1224666 in ceilometer "alarm_history_project_fkey Constraint fails on el6 MySQL" [High,Triaged]
15:06:07 <jd__> so volunteers welcome
15:06:08 <eglynn> (seems to me that issue should have gone away since https://review.openstack.org/45306 landed)
15:06:26 <jd__> eglynn: ack, I guess you can test it?
15:06:36 <eglynn> jd__: yep, I've done so
15:06:37 <jd__> end of my speech for RC1, questions?
15:06:48 <eglynn> (will confirm with dprince in any case ...)
15:08:28 <jd__> #topic Release python-ceilometerclient?
15:08:37 <jd__> I don't think we need that yet
15:08:49 <jd__> we may want to release once sileht's changes on alarming get merged
15:09:04 <eglynn> yep wait on https://review.openstack.org/#/c/46707/
15:09:14 <jd__> exactly
15:09:16 <sileht> ya
15:10:05 <jd__> #topic Open discussion
15:10:23 <jd__> I'm out of topics :)
15:10:33 <sileht> wsme supports return codes \o/
15:10:42 <thomasm> lol
15:10:47 <dragondm> heh
15:10:53 <eglynn> 204s, 201s, the joy of it! :)
15:11:27 <thomasm> Looks like the tests got resolved. Thanks for that. =]
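A minimal sketch of the WSME feature being celebrated above: declaring non-200 status codes on API controller methods, roughly as ceilometer's v2 alarm API would use it. The Alarm/AlarmsController names here are illustrative, not the exact merged code.

```python
from pecan import rest
from wsme import types as wtypes
import wsmeext.pecan as wsme_pecan


class Alarm(wtypes.Base):
    """A trimmed-down alarm representation for the example."""
    name = wtypes.text
    enabled = bool


class AlarmsController(rest.RestController):

    @wsme_pecan.wsexpose(Alarm, body=Alarm, status_code=201)
    def post(self, data):
        # Persist the alarm, then return it; with status_code
        # support, WSME sends 201 Created instead of a blanket 200.
        return data

    @wsme_pecan.wsexpose(None, wtypes.text, status_code=204)
    def delete(self, alarm_id):
        # 204 No Content on successful deletion.
        pass
```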
15:11:39 <nadya> guys, do you have any docs related to Autoscaling integration with heat?
15:11:59 <lsmola> nadya, would like to read that also
15:12:09 <eglynn> not that I'm aware of, asalkeld may have something on the wiki
15:12:21 <jd__> I would hope it doesn't need any documentation ;)
15:12:23 <eglynn> (IIRC he's back from vacation on Monday)
15:12:48 <nadya> so all the changes will be on heat's side?
15:13:05 <sileht> nadya, an example heat template: https://github.com/openstack/heat-templates/blob/master/cfn/F17/AutoScalingCeilometer.yaml
15:13:25 <nadya> sileht, thanks!
15:13:36 <sileht> nadya, everything you need has been merged
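For context, creating the kind of threshold alarm that template wires up looks roughly like this with python-ceilometerclient. This is a hedged sketch: the credentials and webhook URL are placeholders, and the field names follow the threshold-alarm API that review 46707 is bringing to the client, so details may shift.

```python
from ceilometerclient import client

# Placeholder credentials for the sketch.
cclient = client.get_client(
    2,
    os_username='admin',
    os_password='secret',
    os_tenant_name='admin',
    os_auth_url='http://keystone.example.com:5000/v2.0',
)

alarm = cclient.alarms.create(
    name='scale-up-on-cpu',
    type='threshold',
    threshold_rule={
        'meter_name': 'cpu_util',
        'statistic': 'avg',
        'comparison_operator': 'gt',
        'threshold': 70.0,
        'period': 60,
        'evaluation_periods': 1,
    },
    # Heat's autoscaling webhook, triggered when the alarm fires
    # (placeholder URL).
    alarm_actions=['http://heat.example.com:8000/v1/signal/arn...'],
)
print(alarm.alarm_id)
```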
15:13:37 <sandywalsh> just in case you missed the email, we're pushing hard for more notification support in openstack projects.
15:13:45 <sandywalsh> #link http://www.sandywalsh.com/2013/09/notification-usage-in-openstack-report.html
15:14:01 <sandywalsh> beat the drum "more events!" :)
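Each notification a project emits can be turned into metering samples by a small ceilometer plugin, which is why the push for more events matters. Below is a hedged sketch of roughly the Havana-era notification-plugin shape; the exact base-class and helper names are from memory and may differ from the tree.

```python
from ceilometer import plugin
from ceilometer import sample


class InstanceNotifications(plugin.NotificationBase):
    """Turn nova compute notifications into metering samples."""

    # Notification event types this plugin consumes.
    event_types = ['compute.instance.*']

    @staticmethod
    def get_exchange_topics(conf):
        # Listen on nova's notification exchange.
        return [plugin.ExchangeTopics(
            exchange='nova',
            topics=['notifications.info'])]

    def process_notification(self, message):
        payload = message['payload']
        yield sample.Sample.from_notification(
            name='instance',
            type=sample.TYPE_GAUGE,
            unit='instance',
            volume=1,
            user_id=payload.get('user_id'),
            project_id=payload.get('tenant_id'),
            resource_id=payload.get('instance_id'),
            message=message,
        )
```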
15:14:19 <jd__> btw, PTL elections will start tomorrow https://wiki.openstack.org/wiki/PTL_Elections_Fall_2013 and I think I heard ttx needed officials to help, if some of you are interested
15:14:21 <sileht> sandywalsh, cool reports
15:14:42 <lexx> hi, I'm working on an IPMI implementation for monitoring physical devices. What data would you prefer to collect there?
15:15:29 <jd__> lexx: everything?
15:15:48 <eglynn> lexx: do you mean monitoring over IPMI itself?
15:15:55 <gordc> jd__: agreed.
15:16:21 <eglynn> lexx: (... does that expose everything we currently gather from the hypervisor in the libvirt case?)
15:16:48 <jd__> eglynn: I think it's remotely read, no? lexx?
15:17:01 <lexx> yes
15:17:11 <gordc> lexx: are you building off this bp: https://blueprints.launchpad.net/ceilometer/+spec/ipmi-inspector-for-monitoring-physical-devices ?
15:17:24 <lexx> yes
15:17:33 <gordc> lexx: cool cool
15:17:38 * eglynn reads ...
15:17:49 <lexx> we already have snmp implementation
15:18:34 <lexx> but I need to know which data should be collected by the IPMI implementation
15:19:37 <dragondm> whatever you can get?
15:19:54 <eglynn> yeah that was kinda my original question
15:19:55 <lsmola> lexx, what are the advantages over snmp Hardware agent?
15:20:08 <eglynn> i.e. what is available for collection over IPMI?
15:20:37 <lexx> fan speed, voltage, temperature
15:20:38 <sandywalsh> https://pypi.python.org/pypi/pyipmi/0.8.0
15:20:52 <sandywalsh> power on/off would be great (especially for heat)
15:21:12 <lsmola> lexx, oh cool
15:21:19 <sandywalsh> bios change for security
15:21:52 <sandywalsh> os failure
15:22:12 <sandywalsh> sounds like a combination of events and meters
15:22:31 <nadya> one more question from me. I saw several hbase-related bps on launchpad. They are approved but have not been started since May. Is there any activity on that anywhere :)?
15:22:51 <lsmola> sandywalsh, that is great, that is exactly what we need in Tuskar and TripleO
15:24:01 <lsmola> sandywalsh, also a health monitor of hosts in Horizon would be great
15:24:12 <gordc> nadya: are you volunteering? :)
15:24:31 <nadya> gordc, thinking about this :)
15:25:00 <thomasm> nadya, I just implemented duplicating resource metadata in the meter collection from the HBase driver. That's the only activity on it that I'm aware of.
15:25:07 <gordc> nadya: i'd say hbase is probably the least actively supported of the backends we have... i don't think anyone will fight over it with you.
15:25:18 <lexx> sandywalsh, I didn't find any documentation for pyipmi. I prefer to use ipmitool in a subprocess
15:25:44 <thomasm> nadya, But, I'm not an HBase maintainer, just wanted to address the bug I was working on for every driver.
15:26:07 <sandywalsh> lsmola, yeah, I can see ironic using that too
15:26:13 <nadya> gordc, ok, I see. Thank you for the answer
15:26:44 <sandywalsh> lexx, there seem to be several implementations, which is good
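A minimal sketch of the approach lexx describes: shelling out to ipmitool and parsing its pipe-separated sensor listing into the fan-speed, voltage, and temperature readings mentioned above. It assumes ipmitool is installed and a local BMC is present; remote reads would add -H/-U/-P options.

```python
import subprocess

# Units we care about, per the discussion above.
INTERESTING_UNITS = ('RPM', 'Volts', 'degrees C')


def read_ipmi_sensors():
    """Yield fan/voltage/temperature readings from `ipmitool sensor`."""
    out = subprocess.check_output(['ipmitool', 'sensor'])
    for line in out.decode('utf-8', 'replace').splitlines():
        fields = [f.strip() for f in line.split('|')]
        if len(fields) < 4:
            continue
        name, value, unit, status = fields[:4]
        if unit in INTERESTING_UNITS and value != 'na':
            yield {'sensor': name, 'value': float(value),
                   'unit': unit, 'status': status}


if __name__ == '__main__':
    for reading in read_ipmi_sensors():
        print(reading)
```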
15:26:54 <nadya> thomasm, thanks for info!
15:26:59 <thomasm> nadya, sure thing
15:27:41 <jd__> nadya: definitely cool if you can take over HBase maintenance :)
15:27:42 <lsmola> sandywalsh, cool, I think tripleo is still using Nova-baremetal, but we should switch to Ironic soon
15:28:20 <lsmola> lexx, putting you down as a dependency for Tuskar integration
15:28:59 <lsmola> lexx, will also put it in Horizon, once I figure out where :-) though it will be something with statistics of hosts
15:29:50 <nadya> jd__, I love HBase :) But I'm new to ceilometer, so I need to dive into it first
15:30:00 <jd__> nadya: sure :)
15:30:58 <lexx> lsmola, no it's not. But statistics of hosts are already collected by SNMP
15:32:13 <lsmola> lexx, if you mean the Hardware agent, it is collecting only a few of them
15:32:51 <lexx> yes
15:32:58 <lsmola> lexx, the ones named by you and sandywalsh are not there as far as I know
15:33:13 <lsmola> lexx, it would be cool to add them
15:33:40 <lexx> ok
15:33:41 <lsmola> lexx, for tuskar and tripleo, we are monitoring only the hardware, so this will be very welcome
15:34:51 <lsmola> lexx, is there some way to monitor e.g. disk health?
15:35:14 <lsmola> lexx, in general, we are looking for a way to measure the health of the hardware
15:35:44 <lsmola> lexx, so any failure data will be usable I guess
15:36:42 <lexx> okay
15:37:04 <lsmola> lexx, cool
15:37:22 <lsmola> lexx, thank you very much
15:37:32 <lexx> no problem)
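On lsmola's disk-health question: a BMC usually does not expose disk health over IPMI, so one alternative, sketched here purely for illustration, is polling SMART status with smartctl (a different tool than the IPMI path discussed above).

```python
import subprocess


def disk_healthy(device):
    """Return True if smartctl reports overall SMART health as PASSED."""
    proc = subprocess.run(['smartctl', '-H', device],
                          capture_output=True, text=True)
    # smartctl prints e.g. "SMART overall-health self-assessment
    # test result: PASSED" for a healthy drive.
    return 'PASSED' in proc.stdout


if __name__ == '__main__':
    print('/dev/sda healthy:', disk_healthy('/dev/sda'))
```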
15:38:15 <jd__> #endmeeting