15:00:39 <gordc> #startmeeting ceilometer
15:00:39 <openstack> Meeting started Thu Aug 21 15:00:39 2014 UTC and is due to finish in 60 minutes.  The chair is gordc. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:41 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:43 <openstack> The meeting name has been set to 'ceilometer'
15:00:46 <gordc> hey folks
15:00:48 <nsaje> o/
15:00:50 <llu-laptop> o/
15:01:07 <kurtrao> o/
15:01:23 <gordc> first meeting i'm running on my end so feel free to call me out if i miss anything
15:01:24 <idegtiarov> o/
15:01:28 <kudva> Hi
15:01:40 <cdent> hola
15:01:48 <kudva> this is my first meeting to discuss a small change blueprint I have submitted
15:02:09 <gordc> kudva: cool cool. welcome
15:02:11 <llu-laptop> kudva: welcome
15:02:24 <nealph> o/
15:02:33 <sileht> o/
15:03:02 <pradk> o/
15:03:09 <gordc> cool. lets assume quorum... i think a lot of people are away this week so this might be quick.
15:03:23 <gordc> #topic Juno-3 planning
15:03:26 * nealph thinks quick is okay... :)
15:03:35 <gordc> https://launchpad.net/ceilometer/+milestone/juno-3
15:03:47 <gordc> here's a list of bps and bugs we're tracking for j-3...
15:03:48 <DinaBelova> o/
15:04:09 <gordc> at last check they've all had code pushed to gerrit so we can all start reviewing
15:04:33 <gordc> nsaje: any concerns on partitioning work?
15:05:00 <nsaje> gordc: no, awaiting further feedback from interested parties that haven't reviewed it yet
15:05:13 <DinaBelova> gordc, nsaje, /me not having concerns exactly, just something to say
15:05:24 <gordc> DinaBelova: go for it
15:05:43 <DinaBelova> idegtiarov and I are working on kind of testing of this HA patch to see if it works, etc. and how
15:05:52 <DinaBelova> possibly some other efforts here will be cool as well
15:05:58 <DinaBelova> to have many eyes here
15:06:17 <gordc> DinaBelova: agreed, open invite on reviews for this patch since it's a pretty feature.
15:06:25 <kurtrao> I still wonder if https://review.openstack.org/#/c/104784/ could be landed.  Looks like there's no discussion anymore. Both pros and cons are made clear
15:06:26 <gordc> pretty unique feature
15:06:26 <DinaBelova> gordc, yes, indeed
15:06:34 <gordc> pretty feature too.lol
15:07:12 <gordc> kurtrao: i haven't really tracked that since... i'll take a look.
15:07:26 <gordc> for bps not yet accepted. please continue to work on them.
15:07:58 <gordc> today is unofficially feature proposal freeze day but that's to be discussed with eglynn when he comes back
15:08:21 <gordc> just keep working on it but no guarantees for juno
15:08:29 <kurtrao> gordc: Yes. That's why I want to push it
15:08:50 <gordc> kurtrao: seem to be a lot of -1's... is there a new patch expected?
15:09:25 <DinaBelova> gordc, these -1s are conceptual ones and depend much on personal opinion as far as I remember
15:09:40 <DinaBelova> so we need some kind of boss to decide :)
15:10:06 <kurtrao> gordc: I think for the new meter, no more patches. But there's room for improvement, in separate BPs
15:10:14 <kurtrao> gordc: you can refer to the author's last comment
15:10:14 <gordc> DinaBelova:  i see... err. everyone is a boss so more eyes are better. i'll try to take a look later today
15:10:23 <DinaBelova> gordc, cool :)
15:10:39 <gordc> carrying on. llu-laptop  any concerns with your snmp work? or just need reviews on your patches?
15:11:12 <llu-laptop> just needs review
15:11:20 <llu-laptop> one patch has landed, the second needs review
15:11:30 <gordc> llu-laptop: cool cool. let's get that merged.
15:11:52 <llu-laptop> got stuck by a strange jenkins doc error
15:12:06 <gordc> my work regarding normalising data is up... there's discussion on performance with srinivas from HP... we'll post it to patch for people to track.
15:12:22 <gordc> if others could verify there's no performance hit on their end that'd be great.
15:12:58 <gordc> nealph: you still blocked with doc issues or you ok?
15:13:16 <gordc> https://blueprints.launchpad.net/ceilometer/+spec/paas-event-format-for-ceilometer
15:13:22 <nealph> gordc: good to go. Added ildikov to the review for some initial feedback.
15:13:39 <gordc> nealph: cool cool. good to hear.
15:13:40 <nealph> DinaBelova: you probably have some thoughts too...jump in.
15:14:06 <DinaBelova> nealph, /me jumping :)
15:14:21 <nealph> when it's final-ish I'll add add'l folks from the PaaS services to comment too.
15:14:30 <nealph> evangelize, sort of.
15:14:53 <ildikov> gordc: I could build nealph's patch locally, so it should be good in general, I added one tiny comment that was an issue with Sphinx in it
15:15:06 <DinaBelova> nealph, good practice :)
15:15:09 <gordc> ildikov: awesome.
15:15:27 <gordc> so i guess that's it for current bps. regarding juno-3, please help review the bps out there...
15:15:37 <cdent> gordc, grenade stuff
15:15:42 <cdent> I'm pretty much blocked on the grenade/javelin2 stuff.
15:15:43 <gordc> cdent: right
15:15:49 <ildikov> gordc: I hope dhellmann will have some idea about the general doc issue that was also raised by nsaje on the dev ML not so long ago
15:16:06 <gordc> ildikov: i'll bring that up after cdent's part
15:16:27 <cdent> jogo has presented some concerns on whether ceilometer should even be in javelin, on the review: https://review.openstack.org/#/c/102354/
15:16:30 <gordc> cdent: did you get any feedback from joe or sean?
15:16:35 <DinaBelova> ildikov, well, it appeared this night I guess
15:16:50 <jogo> cdent: further discussion is needed
15:16:59 <cdent> and I've got a pending message to the list trying to figure out how to get some info on making the test actually robust: http://lists.openstack.org/pipermail/openstack-dev/2014-August/043372.html
15:17:03 <gordc> cdent: i mean it'd be good to have an upgrade path for ceilometer tested.
15:17:15 <cdent> So until the further discussion happens (however it happens), I'm not sure what's up.
15:17:30 <gordc> i think we have one resource here that might be able to take a look at it tomorrow and give feedback.
15:17:31 <jogo> cdent: I haven't had a chance to follow up on that
15:17:46 <gordc> jogo: thanks for the input.
15:18:20 <cdent> I think the main crux of this is how robust we want the testing to be. It can be much simpler than I've done it but then there will be ambiguities.
15:18:44 <jogo> cdent: robust and simple
15:19:51 <jogo> cdent: anyway I will try to follow up offline
15:19:55 <cdent> thanks jogo
15:20:03 <gordc> jogo: cdent: awesome.
15:20:16 <ildikov> DinaBelova: I couldn't check what has changed related to sphinx, or at least what I could check that didn't in the near past
15:20:18 <gordc> cdent: if you're still blocked tomorrow or monday we can try elevating to bigger group.
15:20:26 <cdent> roger gordc
15:20:30 <gordc> anything else for juno-3?
15:20:51 <gordc> if not, please carry on... give some reviews for current bps
15:21:24 <gordc> bps not approved will be tbd for juno but please continue working on them with some hope it'll be in juno
15:21:53 <gordc> #topic broken docs job
15:22:09 <gordc> ildikov: you have an update?
15:22:16 <gordc> or did you and DinaBelova hash it out already?
15:22:23 <DinaBelova> :)
15:22:55 <ildikov> gordc: not that much actually as I couldn't find a root cause for this issue
15:22:57 <DinaBelova> well, gordc, last time /me was looking at it, it was something really strange, as ceilometer.agent seems to have no link with wsme
15:23:07 <DinaBelova> and its sphinxext module
15:23:22 <gordc> DinaBelova: yeah, i'm not sure there's a connection there either
15:23:23 <nsaje> does anyone know how Sphinx decides which extension to use for a class?
15:23:45 <nsaje> the exception happens in https://github.com/stackforge/wsme/blob/master/wsmeext/sphinxext.py#L348
15:23:50 <gordc> I'll take a look as well... dhellmann ^^ any guesses to sphinx issue?
15:23:58 <nsaje> and ServiceDocumenter does not have a can_document_class() method
15:24:08 <DinaBelova> nsaje, ++
15:24:08 <ildikov> gordc: DinaBelova: the expert of Sphinx in OpenStack is dhellmann, or at least I've never had an issue with Sphinx of which he couldn't know the answer/fix...
15:24:18 <nsaje> on a long shot, it is possible it's getting confused because there's a service.py file in ceilometer.alarm
15:24:25 <DinaBelova> ildikov :)
15:24:37 <ildikov> nsaje: there are several extensions that are used during the different stages of the docco build
15:24:52 <gordc> dhellmann, ryanpetrello would be great resources to ask. we'll try reaching out today. or if someone figures it out let us know in openstack-ceilometer
15:25:06 <gordc> circle back to this later?
15:25:14 <ildikov> nsaje: where the problem occurs now is after the stage where the docstrings are read from the actual code
15:25:40 <ildikov> gordc: yeap, I don't think we could add more to this now
15:25:44 <nsaje> ok, let's take this off-meeting
15:25:49 <gordc> cool cool
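[editor's note: for readers following the Sphinx thread above, here is a rough sketch of the documenter-selection mechanism being discussed. Names are simplified and illustrative, not the real Sphinx or wsme internals; it only shows why a registered documenter that is missing its lookup classmethod breaks the build.]

```python
# Illustrative sketch of autodoc-style documenter selection (simplified,
# not actual Sphinx code). Each documenter advertises a priority and a
# can_document_member() classmethod; the highest-priority documenter
# that claims the object wins. A documenter registered without that
# classmethod -- as nsaje suspects of wsme's ServiceDocumenter -- fails
# at exactly this lookup step.

class ClassDocumenter:
    priority = 10

    @classmethod
    def can_document_member(cls, member, membername, isattr, parent):
        # claims anything that is a class
        return isinstance(member, type)


class ServiceDocumenterStub:
    # hypothetical stand-in: higher priority, but no can_document_member()
    priority = 20


def pick_documenter(registry, member):
    candidates = []
    for doccls in registry:
        try:
            if doccls.can_document_member(member, "x", False, None):
                candidates.append(doccls)
        except AttributeError:
            # the failure mode discussed above: the documenter cannot
            # even be asked whether it applies (here we skip it; in the
            # real build the exception surfaces in the traceback)
            pass
    # highest priority wins
    return max(candidates, key=lambda c: c.priority) if candidates else None


class Example:
    pass


chosen = pick_documenter([ClassDocumenter, ServiceDocumenterStub], Example)
```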
15:25:57 <gordc> #topic Tempest status
15:26:03 <gordc> any updates here?
15:26:17 <DinaBelova> gordc, /me trying to understand what that strange error was
15:26:21 <DinaBelova> in my reverting change
15:26:37 <gordc> which my patch fixed? or the grenade error you saw after?
15:26:43 <DinaBelova> grenade one
15:26:53 <DinaBelova> gordc, thanks for your patch, btw
15:27:01 <DinaBelova> it was a little surprising for me :)
15:27:04 <DinaBelova> that error
15:27:12 <gordc> DinaBelova: hmm. i hadn't had a chance to look at it but i'll try later today... it seemed like some keystone stuff
15:27:27 <DinaBelova> gordc, yes, it was - but it's kind of floating bug
15:27:42 <DinaBelova> I found other bugs (not the same) looking like that
15:27:46 <DinaBelova> but not this one
15:27:47 * gordc hoping it's a simple recheck
15:28:00 <gordc> DinaBelova:  i'll try debugging that later today i think.
15:28:11 <DinaBelova> gordc, /me rechecked, waiting in progress
15:28:19 <DinaBelova> gordc, if it'll be needed
15:28:25 <gordc> any other blockers? experimental check working good?
15:28:34 <DinaBelova> gordc, yes, all the time
15:28:55 <gordc> DinaBelova: cool cool. anyone else with tempest issues?
15:29:25 <gordc> #topic TSDaaS/gnocchi status
15:29:39 <gordc> jd__: any update here? or you on vacation?
15:29:47 * DinaBelova wanted to ask if somebody knows about influxdb status
15:29:52 <jd__> I'm not on vacation
15:30:07 <jd__> I'm waiting for a review from eglynn
15:30:11 <jd__> on archiving policy
15:30:18 <jd__> and I should continue to work on that branch then
15:30:27 <ildikov> in case jd__ asks, I'm at an internal workshop, so I couldn't get to it this week... :S :(
15:30:38 <jd__> I was about to ask you ildikov  :)
15:30:41 <DinaBelova> Eoghan has proposed his change (and my one with OpenTSDB) for a while ago, but I don't know what's the influxdb status at least
15:31:21 <gordc> jd__: i'm not sure eglynn will get around to it until next week... do you have a link?
15:31:21 <ildikov> DinaBelova: do you mean in Gnocchi or in general?
15:31:36 <DinaBelova> well, in general, as it leads us to the Gnocchi
15:31:41 <jd__> #link https://review.openstack.org/#/q/status:open+project:stackforge/gnocchi+branch:master+topic:jd/archiving-policy,n,z
15:31:42 <DinaBelova> we needed some of their features
15:32:05 <DinaBelova> ildikov, and while they were not ready eglynn's change was not full
15:32:06 <gordc> DinaBelova: i'll follow up with eglynn.. i'm not sure what status is for influx but it's still a driver we're intending on supporting at last check
15:32:16 <jd__> so yeah things have slowed down a little but I hope to continue on that soon
15:32:21 <ildikov> DinaBelova: yeap, I remember that long list from the mid-cycle, I just wasn't 100% sure that you meant that also
15:32:44 <gordc> jd__: awesome. sorry haven't looked at any of the patches.
15:32:48 <DinaBelova> ildikov, yes, indeed
15:32:52 <jd__> np
15:33:37 <gordc> #action eglynn - update on influxdb support in gnocchi
15:33:44 <gordc> i've no idea how to use action ^
15:33:51 <gordc> but good enough
15:34:07 <gordc> any other items regarding gnocchi?
15:34:20 <ildikov> gordc: I think this will be enough for eglynn, when he checks the logs :)
15:34:46 <gordc> ildikov: perfect.
15:34:49 <gordc> moving on.
15:34:55 <gordc> #topic Future of the integrated release ML thread - update?
15:35:04 <gordc> i'm not sure who added this?
15:35:23 <nsaje> probably Eoghan
15:35:29 <gordc> but i've been following the thread and just haven't replied to avoid fanning the flames.
15:35:30 <ildikov> gordc: it can happen that it's a leftover from last week
15:35:58 <nealph> gordc: I think the general comment would be for folks to track the ML if they're interested.
15:36:02 <gordc> but side note. if anyone does have concerns... be it ceilometer, stacktach, monasca, another monitoring tool...
15:36:03 <ildikov> anyway, seems to be an endless thread there
15:36:34 <gordc> feel free to bring it up on openstack-ceilometer or offline... whatever you're comfortable with...
15:37:27 <gordc> and onwards we go.
15:37:31 <gordc> #topic Ceilometer Client bug update
15:37:46 <gordc> https://bugs.launchpad.net/python-ceilometerclient/+bug/1351841
15:37:47 <uvirtbot> Launchpad bug 1351841 in python-ceilometerclient "python-ceilometerclient does not works without v3 keystone endpoint" [High,Triaged]
15:37:53 <gordc> fabiog: any update on this?
15:38:06 <fabiog> gordc, I checked the bug using   0.10.2 (Keystone) and 1.0.11 (Ceilometer)
15:38:15 <fabiog> in devstack
15:38:21 <fabiog> and I cannot reproduce the bug
15:38:35 <DinaBelova> fabiog, hehe, funny thing...
15:38:37 <DinaBelova> as usually
15:38:47 <gordc> fabiog: yeah... i think i tried as well and it didn't come up.
15:38:58 <gordc> fabiog: maybe we need to bump our requirements?
15:39:07 <ildikov> fabiog: I guess that's the ultimate latest Devstack then
15:39:23 <nsaje> I'd like to appeal to the core team to show some love to python-ceilometerclient :-)
15:39:29 <fabiog> gordc, a fresh devstack picks those versions
15:39:29 <nsaje> plenty old patches with one +2
15:39:40 <ildikov> fabiog: about a week ago or so, I faced this issue after a fresh devstack install using the admin port in the auth url
15:40:08 <gordc> nsaje: we don't need your politicking here
15:40:11 <gordc> :)
15:40:15 <cdent> hah!
15:40:17 <nsaje> hehe :)
15:40:22 <fabiog> ildikov: I suspect it was the combination of ceilo client and keystone client that weren't compatible
15:40:33 <fabiog> nevertheless, if you specify a wrong IP address for auth_url, then Keystone should return that it cannot find it
15:40:39 <gordc> but yeah, remember: https://review.openstack.org/#/q/status:open+project:openstack/python-ceilometerclient,n,z
15:40:41 <fabiog> and instead it blows
15:40:49 <fabiog> so I filed a bug against Keystone client
15:40:51 <ildikov> nsaje: got it, but the fact is that a day still consists of 24 hours... :(
15:41:02 <gordc> fabiog: so i guess recommendation is to use latest keystoneclient?
15:41:06 <fabiog> #link https://bugs.launchpad.net/python-keystoneclient/+bug/1359412
15:41:07 <uvirtbot> Launchpad bug 1359412 in python-keystoneclient "keystone discovery command throws exception" [Undecided,New]
15:41:16 <fabiog> gordc: yes
15:41:28 <gordc> fabiog: awesome. i noticed you commented on bug as well...
15:41:33 <gordc> thanks for looking at it.
15:41:41 <fabiog> gordc: and make sure that the auth_url is correct
15:41:47 <ildikov> fabiog: so nova and glance worked fine with the url:port combo, only the ceilometer client was not working with those params
15:41:48 <fabiog> gordc: no problem
15:42:37 <ildikov> gordc: fabiog: I guess it's the third version of the same issue: https://bugs.launchpad.net/ceilometer/+bug/1350533
15:42:39 <uvirtbot> Launchpad bug 1350533 in ceilometer "CommandError: Unable to determine the Keystone version to authenticate with using the given auth_url: http://127.0.0.1:35357/v2.0" [High,Confirmed]
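[editor's note: as background on the error quoted in that bug title -- the client decides between v2.0 and v3 by inspecting the version document Keystone serves at the auth_url. A minimal sketch of that discovery step follows; function and field handling are illustrative, not keystoneclient's actual internals.]

```python
# Rough sketch of identity-version discovery (illustrative, not the
# real python-keystoneclient code). The client fetches the auth_url
# root and inspects the JSON it gets back: either {"version": {...}}
# (a versioned endpoint like /v3) or {"versions": {"values": [...]}}
# (the unversioned root). If neither shape is present, discovery
# fails, which users see as "Unable to determine the Keystone
# version to authenticate with using the given auth_url".

def pick_identity_version(doc):
    """Return the newest usable API version id from a discovery doc,
    or None when the document is unrecognizable."""
    if "version" in doc:
        values = [doc["version"]]
    elif "versions" in doc:
        values = doc["versions"]["values"]
    else:
        return None
    usable = [v["id"] for v in values
              if v.get("status", "").lower() in ("stable", "current", "supported")]
    # version ids like "v2.0" / "v3.0" sort usably as strings here
    return max(usable) if usable else None
```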
15:42:46 <fabiog> ildikov: I tested glance and nova and they work
15:42:49 <ildikov> or well, the first in time
15:43:54 <fabiog> ildikov: for the related bug on alarms try with a fresh devstack and see if you can replicate the issue
15:43:57 <ildikov> fabiog: in my devstack install that I mentioned they worked for me too, I only had issues with Ceilometer, but I will refresh my devstack env and will check
15:44:26 <gordc> ildikov: cool cool. let us know if it works... or if you need help on that bug.
15:44:45 <ildikov> ok, I will close the bug which has my name on it as we have two other versions for it and I do not have time now to deal with it further and it seems that it would be a duplicated work anyway
15:44:48 <fabiog> ildikov: make sure that you are using the correct IP address for the auth_url for Keystone, it should not be localhost
15:45:18 <gordc> ildikov: sounds good. just mark it as duplicate of whatever bug fabiog has i guess
15:45:27 <ildikov> fabiog: I used the right one
15:45:45 <ildikov> fabiog: I at least triple checked it :)
15:45:57 <gordc> ildikov: one more time!
15:46:07 <fabiog> gordc: please assign it to me
15:46:11 <ildikov> fabiog: but I will install a fresh devstack env and will get back to you with version info, if I still see this error
15:46:39 <fabiog> gordc: if there are duplicates I will be able to quickly see if the problem still persists
15:46:59 <ildikov> fabiog: gordc: so we have three bug reports now
15:47:19 <ildikov> fabiog: gordc: let's pick one that we will update and mark the rest as duplicates
15:47:31 <gordc> ildikov: can you paste them here or mark them as duplicates of the one fabiog is assigned to
15:47:44 <ildikov> fabiog: gordc: I will administer it, no probs, just tell me which one should remain as active
15:48:00 <gordc> fabiog, you can track duplicated bugs on the right side in launchpad
15:48:03 <ildikov> gordc: ok, I will mark them
15:48:14 <gordc> thanks ildiko!
15:48:23 <gordc> ok, i think we're good.
15:48:32 <gordc> #topic Open discussion
15:48:38 <gordc> free for all
15:48:42 <DinaBelova> gordc, I have some interesting thing :)
15:48:43 <kudva> Hi
15:48:58 <DinaBelova> kudva, ok, please be free to be first
15:49:01 <DinaBelova> :)
15:49:02 <kudva> I wanted to discuss a predictive failure alert metric addition
15:49:24 <kudva> We have implemented a 'host predictive failure' which runs as a plugin on the host
15:49:24 <gordc> kudva: new feature?
15:49:34 <kudva> gordc: yes I think so
15:49:39 <gordc> ok.
15:49:47 <nsaje> kudva: tell us more
15:50:04 <kudva> when we detect a failure we add a notification for possible failure so that all vms running on it can be migrated away.
15:50:21 <kudva> we were able to do this with very little code
15:50:24 <kudva> https://blueprints.launchpad.net/ceilometer/+spec/predictive-failure-metric
15:50:27 <kudva> it works for us.
15:50:32 <gordc> kudva: elevator pitch... side note: could you write a detailed item here: https://github.com/openstack/ceilometer-specs
15:50:48 <kudva> We wanted to know how to contribute this to the community code
15:50:52 <kudva> gordc will do
15:50:54 <gordc> kudva: we track our specs there for everyone to review
15:51:19 <gordc> kudva: start with spec write up... it won't get into juno as i'm assuming it's not a short patch.
15:51:19 <kudva> gordc: will do. Didn't know the exact procedure for proposing and acceptance for ceilometer. We have contributed code directly to other projects
15:51:29 <DinaBelova> kudva, yeah, spec will be good
15:51:45 <gordc> kudva: yeah. new process most projects implemented this cycle
15:51:45 <nsaje> kudva: what emits this predictive failure alert?
15:51:52 <kudva> gordc: About 10 lines of code total as you can see in this blueprint
15:51:55 <kudva> https://blueprints.launchpad.net/ceilometer/+spec/predictive-failure-metric
15:51:56 <DinaBelova> kudva, that's the new practice for BP acceptance
15:52:20 <kudva> DinaBelova: Ah, ok.  I thought it was a blueprint, so I put the link above for review
15:52:39 <llu-laptop> kudva: https://wiki.openstack.org/wiki/Blueprints
15:52:43 <kudva> gordc: So, I put an entry into ceilometer-specs and then need to have it approved?
15:52:44 <gordc> kudva: err.. yeah so take your spec and match it to the template...
15:52:52 <DinaBelova> kudva, BPs are technically non-deletable in LP, so we want to have something deletable in gerrit
15:53:04 <DinaBelova> so if something turns out to be unacceptable we don't keep creating new ones
15:53:09 <gordc> kudva: if it's really small, it may get into juno? i don't want to give false hope though
15:53:29 <gordc> kudva: yeah, we'd approve specs...but you can post code at the same time
15:53:43 <DinaBelova> gordc, ++
15:53:55 <DinaBelova> for this small change both spec + code will be good
15:53:55 <kudva> gordc: okay will do both together asap
15:54:04 <gordc> kudva: cool cool
15:54:15 <kudva> gordc: thanks!
15:54:16 <DinaBelova> kudva, if you'll do that today, you'll have more chances for juno
15:54:21 <DinaBelova> and me :)
15:54:32 <kudva> DinaBelova: yes, will do it today :)
15:54:38 <DinaBelova> gordc, today I got performance benchmarking results of ceilo on real lab
15:54:47 <ildikov> gordc: FYI, bug administration done, I will update the active one, when I have the fresh env
15:54:48 <gordc> DinaBelova: sweet!
15:54:56 <gordc> ildikov: thanks so much
15:54:57 <DinaBelova> I have some nice stuff for the 2000VMs and 1min polling :)
15:55:18 <gordc> DinaBelova: are you using ilya's testing script? i couldn't get it working.
15:55:24 <DinaBelova> so I'm going to prepare doc/blogpost to present the results
15:55:36 <cdent> DinaBelova++
15:55:38 <gordc> DinaBelova: was just going to ask about posting results
15:55:39 <DinaBelova> no-no, we were testing load on the disks, io, etc.
15:55:40 <cdent> eagerto see that
15:55:50 <DinaBelova> so we used rally to perform the load
15:55:55 <gordc> DinaBelova: awesome...ignore the ilya's script part.
15:56:00 <DinaBelova> gordc :)
15:56:24 <DinaBelova> this benchmarking stuff was aimed at disk, CPU, etc. load
15:56:45 <DinaBelova> the only thing I can say now that 1sec polling kills nova :)
15:56:51 <DinaBelova> for sure :)
15:56:54 <gordc> DinaBelova: can you post on openstack-ceilometer when it's up? i don't know how else we can share it.
15:57:04 <DinaBelova> gordc, yes, for sure!
15:57:19 <DinaBelova> I hope to have the docco tomorrow and a blogpost tomorrow or on Monday
15:57:24 <gordc> DinaBelova: yeah. i think that was known... if i recall there's a bp to address that?
15:57:33 <gordc> DinaBelova: awesome
15:57:48 <DinaBelova> gordc, :)
15:58:12 <gordc> https://review.openstack.org/#/c/101814/ not for juno but something
15:58:26 <DinaBelova> gordc, a-ha
15:58:34 <gordc> DinaBelova:  if you have alternative solution that'd be cool too.
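[editor's note: for context on the polling-interval point, in Juno-era ceilometer the polling cadence is set per source in pipeline.yaml, so the 1s-vs-60s load difference on nova comes down to this knob. The fragment below is illustrative, not DinaBelova's actual benchmark config.]

```yaml
# /etc/ceilometer/pipeline.yaml (fragment) -- illustrative example only
sources:
    - name: meter_source
      interval: 60          # seconds between polls; interval: 1 is what "kills nova"
      meters:
          - "*"
      sinks:
          - meter_sink
sinks:
    - name: meter_sink
      transformers:
      publishers:
          - notifier://
```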
15:58:46 <gordc> but anything else?
15:58:50 <DinaBelova> okay, my LP network connection seems to die :-\
15:58:53 <gordc> anyone?
15:58:55 <DinaBelova> will look asap
15:58:59 <gordc> DinaBelova: cool cool
15:59:01 <DinaBelova> gordc, not from me
15:59:05 <cdent> nor me
15:59:18 <gordc> apologies for lying and saying this would be a short meeting.
15:59:23 <DinaBelova> gordc :D
15:59:27 <nsaje> tsk, tsk
15:59:52 <gordc> everyone happy or happy enough?
15:59:59 <gordc> 3
16:00:00 <gordc> 2
16:00:00 <gordc> 1
16:00:02 <DinaBelova> I guess that's it :)
16:00:03 <nsaje> thanks guys, see you tomorrow!
16:00:05 <gordc> #endmeeting