15:00:42 #startmeeting ceilometer
15:00:43 Meeting started Thu Dec 18 15:00:42 2014 UTC and is due to finish in 60 minutes. The chair is eglynn. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:44 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:47 The meeting name has been set to 'ceilometer'
15:00:49 o/
15:00:50 o/
15:00:52 o/
15:00:53 <_elena_> o/
15:00:53 hello
15:01:02 o/
15:01:16 o/
15:01:40 hey y'all
15:01:44 #topic Kilo-1 status
15:01:48 o/
15:01:50 nearly there!
15:02:01 just waiting on https://review.openstack.org/#/c/142435/ to land
15:02:15 eglynn: it looks fine now
15:02:27 no more py33 failures, yeah
15:02:31 eglynn: hopefully in 7 mins it will be landed :)
15:02:31 ildikov: thanks for the quick fix on the sphinx issue
15:02:43 yeap, then we can tag it and bag it :)
15:03:05 thanks all for the effort in getting the last few patches over the line for kilo-1
15:03:08 eglynn: np, we will revert it after having the new pbr, with that the py33 job should be fine
15:03:35 yeap I believe Thierry is chasing a new pbr release
15:04:01 #topic swift middleware disposition
15:04:10 o/
15:04:18 cdent, gordc - gentlemen, the floor is yours :)
15:04:51 Not much to say other than: we need some push on this to make sure it actually happens. There are a few different issues to resolve.
15:04:57 One is that the swift middleware is just a pita.
15:05:18 The other is that the way ceilometer installs into swift's pipeline in devstack is not something that makes Sean et al happy.
15:05:20 o/
15:05:37 cdent, well, I suppose this does not make anyone happy
15:05:42 There's some debate on the review of the spec (ref in a sec) which probably needs more voice, including any voice from swift people that we can dredge up.
15:05:55 cdent: did you hear back from the swift team whether they can own metrics?
15:06:04 #link https://review.openstack.org/#/c/142129/
15:06:14 but frankly speaking I supposed that the way it looks now was proposed by one of the swift folks some time ago..
15:06:40 I spoke with notmyname and he said he was interested in helping, but at the exact moment I was talking to him he was in the middle of a million context switches, so I gave him the relevant links and left him to it.
15:06:44 so two separate issues in my mind here
15:06:47 ... one is the scalability of emitting a notification for each API call (IIUC)
15:06:51 ... the other is injecting code from ceilometer into swift
15:07:02 presumably we're only talking about addressing the latter issue?
15:07:18 eglynn, yeah
15:07:31 I think the latter issue is the one we hope to solve. If the former is actually an issue (that is, if the notification system can't handle it) then that is a major bug
15:07:51 eglynn: i have no idea about scalability of it... it wouldn't be the first service that emits a notification on every single api call (ie. keystone)
15:08:20 OK, so the ceilometermiddleware repo seems like a sane approach here and follows an established pattern with keystone
15:08:20 eglynn: I guess the latter is the kind of bigger issue, separation would be better
15:08:28 cdent: swift can create a huge amount of notifications
15:08:54 It's a message bus, fabiog, if it can't handle a huge amount of notifications it is _fundamentally_ broken
15:08:59 cdent: you will get a message for every file and folder, think of this in a petabyte of data
15:09:33 (anyway, I think that's an issue to address after extracting the code to somewhere else)
15:09:38 not intending to derail the conversation into the scalability side
15:09:47 let's concentrate on the location aspect
15:10:18 so, on the question of the swift team owning their own metrics
15:10:21 cdent: agreed
15:10:35 does that imply that they take over maintaining the swift middleware?
15:11:14 I think that if that can be negotiated, that would be ideal, but we haven't had the conversations to know the odds of making that happen. I put this on the agenda to see if someone here has good connections there.
15:12:14 #action Since no one is leaping forward I guess that's a no, so I'll talk to notmyname some more about swift and the middleware
15:12:29 cdent: thanks
15:12:54 I suspect the most achievable thing would be that we move the middleware into a new ceilometermiddleware repo, and keep the maintainership of the swift m/w
15:13:34 from a historical perspective, I've a vague recollection of there being resistance on the swift side to emitting these notifications directly
15:13:48 ... jd__ might have a clearer recollection of that
15:14:00 * cdent sings: history never repeats...
15:14:05 cdent: LOL :)
15:14:37 eglynn: yes you are right, they do not like oslo that much
15:14:56 from a release/packaging PoV, implications for the separate ceilometermiddleware repo?
15:15:28 would releases be totally decoupled from the main release cycle?
15:15:35 * cdent hopes so
15:15:38 (like the python-ceilometerclient is)
15:15:39 yeah
15:16:16 so we'd tag on demand, push to pypi, packagers pull the source from tarballs.openstack.org
15:16:30 eglynn: that would make sense IMHO
15:16:42 we'll start a trend!
15:16:44 also no stable/ branch in the ceilo m/w repo?
15:16:51 * cdent nods
15:17:23 so effectively that just pushes the stable-maint burden onto the downstream packagers
15:17:34 but prolly OK if rarely needing backported fixes
15:17:49 (a relatively small amount of relatively stable code)
15:18:11 one would hope that once correct it would never need to change
15:18:14 * cdent laughs at self
15:20:54 OK, so it seems like we have an initial approach here (pending some side discussions with the swiftsters)
15:21:50 cdent, gordc: ok to move on?
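For readers following along, the pattern under discussion can be sketched as a WSGI filter that wraps swift's proxy application and emits one notification per API call. This is a minimal illustration only: a plain callback stands in for the oslo.messaging notifier the real middleware uses, and the class name, event type, and payload fields are all illustrative, not taken from the actual code.

```python
class NotifyingMiddleware:
    """Toy WSGI middleware emitting one event per request (illustrative)."""

    def __init__(self, app, notify):
        self.app = app
        # notify is a callable(event_type, payload); the real middleware
        # would publish on the message bus via oslo.messaging instead.
        self.notify = notify

    def __call__(self, environ, start_response):
        statuses = []

        def capture(status, headers, exc_info=None):
            # Record the response status before passing it upstream.
            statuses.append(status)
            return start_response(status, headers, exc_info)

        body = self.app(environ, capture)
        # One event per request -- this per-call cost is exactly what
        # raised the scalability question in the meeting.
        self.notify('objectstore.http.request', {
            'method': environ.get('REQUEST_METHOD'),
            'path': environ.get('PATH_INFO'),
            'status': statuses[0] if statuses else None,
        })
        return body
```

Because the middleware only wraps the pipeline, moving it to a separate ceilometermiddleware repo changes where the code lives without changing how swift loads it.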
15:21:59 yep
15:22:00 ✔
15:22:05 #topic TSDaaS/gnocchi
15:22:44 recently landed https://review.openstack.org/#/q/status:merged+project:stackforge/gnocchi,n,z
15:23:13 notables include ...
15:23:17 sileht's cross-metric aggregation support
15:23:46 amalagon's support for rolling stats in a custom aggregator (\o/!)
15:24:09 jd__'s nice clean up of the metric<->resource association
15:24:32 also ityaptin has resurrected the influx driver, thanks for that!
15:25:02 the ceph driver needs some review
15:25:39 sileht: I'm a newbie to ceph, but I'll have a look FWIW
15:26:05 nellysmitt: any progress on finding a solid topic for your internship?
15:26:39 hi eglynn, no ideas yet :(
15:26:41 eglynn, not yet
15:27:11 we've decided to start with the change proposed by jd__ and actually debugging devstack a lot :D
15:27:32 because it simply doesn't want to work properly
15:27:41 OK that sounds like a reasonable way of getting nellysmitt's hands wet
15:27:53 hehe, indeed
15:28:05 nellysmitt is posting to her blog about the issues she finds
15:28:06 DinaBelova: I couldn't set up devstack yesterday or the day before either... :S
15:28:06 and a little bit of headache also :D
15:28:12 link for the blog?
15:28:13 DinaBelova: "debugging devstack" ... gnocchi-specific issues, or more general devstack weirdness?
15:28:14 http://smittnelly.blogspot.be - lots of them :D
15:28:19 eglynn - both
15:28:23 nellysmitt: business as usual ;)
15:28:23 cool
15:28:37 the last mostly workable devstack killed keystone
15:28:41 the one before it - nova
15:28:59 ildikov, hehe
15:29:29 currently the screen with devstack got completely locked after ceilometer meter-list
15:29:30 >_<
15:29:36 nellysmitt: yeah, excellent idea to blog often ... the OPW admins tend to use blog activity as a metric of "engagement"
15:30:11 yeah, also helps to remember stuff
15:30:21 nellysmitt: are you using your own h/w baremetal style, or local VMs, or VMs spun up in a public cloud?
15:30:28 DinaBelova: find me later about devstack issues, I love(?) trying to fix them.
15:30:29 (as your dev-env)
15:30:42 nellysmitt: you too
15:30:49 cdent, these issues are not concrete ones
15:30:59 at least not yet
15:31:02 ;(
15:31:03 oki, thx, after complete reinstall :D
15:31:04 I get that, those are the fun ones :)
15:31:24 nellysmitt, are you running devstack on a local vm?
15:32:23 both, have several vms on a mac and on an ubuntu laptop
15:32:47 nellysmitt, so you're running them yourself, not from the public cloud
15:32:48 ok
15:33:04 one fedora vm also
15:33:37 and all of them have different issues somehow
15:33:41 all of them do not want to work, but in different ways :D
15:33:43 yeah
15:33:46 :D
15:33:54 the funniest one was with nova availability zones
15:34:08 nellysmitt: would it be a good idea to line up your choice of distro (f20, trusty, precise, whatever) with the one that DinaBelova uses in her dev-env?
15:34:18 nellysmitt: i.e. to reduce the number of variables in play
15:34:28 * DinaBelova is having ubuntu 14.04
15:34:40 afair Nelly had this one as well
15:34:46 a-ha, k
15:34:52 yup
15:34:59 the main one
15:35:01 so-o-o, it's just openstack magic
15:35:06 :)
15:35:28 #action: less magic
15:35:38 OK, so let's all try to get nellysmitt over the hump to a running devstack
15:35:49 eglynn, hehe, cool :)
15:35:51 it'll be nice
15:36:04 after we finally fixed keystone, we lost the whole devstack vm
15:36:15 ;(
15:36:32 drats
15:37:02 * DinaBelova looking to urban dictionary
15:37:12 a-ha, got it
15:37:29 I haven't spun up a devstack in a while, been mainly doing packstack-mediated installs of the stable/juno packages recently
15:37:40 are other folks seeing similar issues with fresh devstacks?
15:37:53 eglynn: yes, keystone in particular.
15:38:02 * nealph thought it was just me.
15:38:09 eglynn, not me...
although my freshest devstack is ~1 week old
15:38:11 eglynn: keystone issue
15:38:15 hehe
15:38:30 smth like too many tries to get tokens?
15:38:35 while polling?
15:38:52 k, I'm gonna spin up a fresh devstack after the meeting and have a look
15:39:13 i've started one now and I'm having lots of dependency problems :(
15:39:32 cdent: yep, those too. :( I reverted to stable/juno
15:39:34 we've fixed it with os_auth_url=http://127.0.0.1:5000/v2.0 directly - although it's a bit hacky, this should not be influencing the keystone connection... but that worked
15:40:11 o/
15:40:32 DinaBelova: are you behind a firewall using a proxy?
15:41:16 DinaBelova: ^^^^ in that setup it likely would influence the connection... i.e. the no_proxy default.
15:41:36 #action: everybody get devstack working nicely again
15:41:52 (we're not going to solve it right now)
15:41:53 llu-laptop, nope!
15:42:37 cdent: agreed
15:42:39 cdent: +1
15:42:39 right-o, better move on, interesting next topic on the agenda
15:42:43 cdent ++
15:42:48 #topic quick gabbi intro
15:42:54 ah, that's me
15:43:07 cdent: yeap, the floor is yours sir!
15:43:12 cdent, hehe :) even one bug found and fixed!
15:43:13 I just wanted to let people know I've got a working spike of the harness that will drive: https://github.com/openstack/ceilometer-specs/blob/master/specs/kilo/declarative-http-tests.rst
15:43:35 I ended up calling it gabbi (because it talks) but it ought to be a good backronym at some point: https://github.com/cdent/gabbi
15:43:47 There's a review up for its integration with ceilo: https://review.openstack.org/#/c/142594/
15:44:01 a-ha, I was wondering about the name :)
15:44:25 There's no intention that that be the final integration, but rather a place where we can play around with it and figure out what's missing or broken.
I couldn't figure out a better way to share a branch
15:44:51 It's already found one tiny buglet in the API, which I think we probably already knew about: / references /v2 but /v2 is a 404
15:45:00 I gave it a quick run earlier and tripped over http://paste.openstack.org/show/152780
15:45:06 cdent: silly question, but it assumes a working tox setup?
15:45:15 (may well be specific to my dev-env, or a dumb setup error on my part)
15:45:40 * nealph has horrible issues with tox and proxies
15:45:55 nealph: It doesn't have to use tox, it's just that I'm trying to make the formal testing folk happy by using the usual harness and tools
15:46:17 if you look at the tox.ini you can just do what it says: install the requirements and run the subunit.run or testtools.run command
15:46:39 sweet.
15:46:41 eglynn: remind me about that later, it looks like the wrong thing is being trapped
15:46:52 cdent: coolness, thanks!
15:47:17 Anyway: I just wanted to give people the heads up because I figure this is going to be pretty useful, but if we want to get it right, early, then we'll need to mess around with it.
15:47:35 eglynn: I would like to give an update on the mid-sprint org details before the end of the meeting, please
15:47:42 nealph: proxychains could help in setting up a tox env behind a proxy
15:48:02 * cdent has said what he wanted to say
15:48:12 so the next step presumably is to start iterating on ceilometer/gabbi/gabbits/basic.yaml to beef up the range of APIs covered?
15:48:24 also would it make sense to experiment with testing the gnocchi API in this way?
15:48:25 eglynn: yes
15:48:28 and yes
15:48:46 (might be a good way for nellysmitt to start learning the gnocchi API, perhaps?)
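As an aside for anyone wanting to extend the gabbits: a gabbi test file is plain YAML describing HTTP requests and assertions on their responses. The sketch below follows gabbi's declarative schema, but the specific tests and expected values are illustrative, not the actual contents of basic.yaml:

```yaml
tests:
  # Each entry is one request; url defaults to a GET.
  - name: the api root redirects to v2
    url: /
    status: 200

  - name: list meters as json
    url: /v2/meters
    status: 200
    response_headers:
      # A value wrapped in slashes is treated as a regex match.
      content-type: /application/json/
```

Tests in one file run in order against the host under test, which is what makes the format handy for walking an API surface like ceilometer's (or gnocchi's).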
15:48:49 in fact I suspect that gnocchi will be a good deal easier to test since its API is much more two way
15:49:04 eglynn, after devstack starts working :D
15:49:09 coolness :)
15:49:19 (I just got one working by disabling horizon)
15:49:45 a-ha, yeah, I usually do that by default anyway
15:50:52 * cdent looks to fabiog
15:51:00 cdent: cool, thanks for the quick intro, sounds like this is something that could be adopted in multiple projects
15:51:02 cdent: those were the dependency issues, yes?
15:51:10 everyone has their own magical ways of devstack repair
15:51:13 #topic update on the mid-sprint org details
15:51:14 nealph: yes eglynn: yes
15:51:27 so I updated the page with the agenda https://etherpad.openstack.org/p/galway-jan-2014-ceilometer-sprint
15:51:45 adding logistics and an attendance list. For now I have confirmed myself and eglynn
15:52:01 please add your name as soon as you are confirmed to come
15:52:15 we also have a block reservation at the Radisson
15:52:54 so if you are interested in staying there at the HP rate, please let me know and I will put your name on one of the block reservations; also give me your check in/out dates
15:53:00 fabiog: thanks for the travel details :)
15:53:11 fabiog: excellent, €92 is a good rate for Galway, thanks for making that available
15:53:29 we are also going to have a dinner that is already being booked for Thur night,
15:53:45 am I envious or jealous, I forget which is which?
15:53:47 fabiog: Wednesday?
15:54:02 (according to the etherpad)
15:54:02 yes sorry
15:54:06 cool :)
15:54:31 so please do add your name to the list if and when you get confirmation from bosses and bean-counters etc.
15:54:51 I can also arrange taxis for people staying at the Radisson ... I will take one anyway
15:55:07 cool, thanks!
15:55:21 so, I really hope to see you there. I guess we won't have a meeting next week ... or will we?
15:55:22 one other bit of housekeeping to deal with ...
15:55:27 #topic meeting cancellation over the holidays
15:56:11 I'm guessing the way Christmas day and New Year's day fall this year means there won't be much demand for a meeting for the next 2 Thursdays
15:56:14 amiright?
15:56:28 * jd__ nods
15:56:36 you are correct sir
15:56:37 eglynn: youareright :)
15:56:45 +
15:56:47 * eglynn will be busily over-cooking a turkey this time next week :)
15:56:48 so the next one will be jan 8?
15:56:51 +1
15:57:12 fabiog: yeap, that's what I'm hearing
15:57:37 ok, cool :)
15:57:41 so please try to finalize your trip by then. I would appreciate that
15:57:42 #topic open discussion
15:58:21 I would like to wish a Merry Christmas and Happy New Year to all of you. It has been a fun year working together
15:58:31 +1
15:58:39 yeah, indeed :)
15:58:41 I would like to wish the same! :)
15:59:02 productive year! tnx to everyone! :)
15:59:06 it's strange Russians have x-mas much later than you folks :D
15:59:13 And Merry Christmas from me!!!
15:59:26 DinaBelova: so you can celebrate it twice ;-)
15:59:28 yes. absolutely, have a great holiday everyone! :)
15:59:38 idegtiarov ;)
15:59:49 DinaBelova: what date is the Russian celebration?
15:59:54 7th Jan
16:00:04 a-ha, wow!
16:00:08 yep! we also
16:00:18 and New Year as usual for all of you :)
16:00:53 I think that's a wrap ... thanks folks for your time today and all the efforts during the past year!
16:00:56 happy holidays~
16:01:07 #endmeeting ceilometer