16:30:36 <sdake> #startmeeting kolla
16:30:37 <openstack> Meeting started Wed Sep  9 16:30:36 2015 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:30:38 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:30:40 <openstack> The meeting name has been set to 'kolla'
16:30:46 <sdake> #topic rollcall
16:30:47 <jpeeler> hi
16:30:49 <akwasnie> hi
16:30:50 <sdake> yo folks o/
16:31:09 <pbourke> howdy
16:31:20 <rhallisey> hey
16:32:08 <sdake> i'll give it a couple more minutes
16:32:18 <inc0> afternoon
16:32:29 <vbel> hi everyone
16:32:51 <sdake> #topic rc1 planning
16:33:02 <sdake> no announcements for this week btw ;)
16:33:44 <[Leeloo]> Hello
16:33:45 <sdake> #link https://launchpad.net/kolla/+milestone/liberty-rc1
16:33:50 <sdake> hey [Leeloo]
16:33:53 <rhallisey> hey
16:34:04 <sdake> you see we have a whole lot of blueprints
16:34:10 <sdake> the deadline is the 25th
16:34:43 <sdake> on the 18th I'm going to kick out any low/medium blueprints not in good progress to mitaka
16:35:38 <pbourke> sdake: fyi I dont see myself having time to tackle neutron-third-party-plugins
16:35:49 <sdake> ok kicking it now then moment
16:35:57 <pbourke> so unless someone wants to grab it kick it forward I guess
16:36:27 <sdake> are there any other blueprints people want to kick
16:36:43 <pbourke> mickt's ansible-murano is done
16:36:48 <pbourke> will mark as so
16:37:08 <sdake> going to go through the started ones now and get a status update
16:37:23 <sdake> #link https://blueprints.launchpad.net/kolla/+spec/logging-container
16:37:43 <inc0> review is upstream
16:37:47 <sdake> inc0 will this definitely make good progress by the 18th?
16:37:54 <sdake> is that all the reviews required?
16:38:26 <inc0> no, we'll need to turn services to use it
16:38:38 <sdake> ok start working on those reviews please
16:38:46 <sdake> don't serialize waiting on this review to finish
16:39:03 <sdake> blocked anywhere or all set?
16:39:17 <inc0> well, I'd just like confirmation if approach itself is correct
16:39:27 <sdake> #link https://blueprints.launchpad.net/kolla/+spec/replace-config-external
16:39:33 <sdake> inc0 i will review your patch after meeting
16:39:39 <sdake> and provide guidance there
16:39:40 <rhallisey> sdake, I'll be ready with that
16:39:51 <sdake> rhallisey it needs to finish sooner rather than later :)
16:39:51 <rhallisey> I'm changing stuff around to work w/ keystone
16:39:54 <sdake> its a big change
16:39:55 <rhallisey> yes
16:40:03 <sdake> i dont want it landing on the 25th
16:40:06 <rhallisey> once I get past keystone it should just be copy paste
16:40:31 <sdake> ok well please if you are blocked reach out to me
16:40:34 <rhallisey> kk
16:40:37 <rhallisey> I'm good atm
16:40:41 <rhallisey> thanks though
16:40:55 <sdake> #link https://blueprints.launchpad.net/kolla/+spec/kolla-ansible-script
16:41:23 <sdake> Hu had indicated he would provide status updates via the mailing list or irc
16:41:26 <sdake> haven't seen any of that
16:41:34 <sdake> may need to reassign this
16:41:55 <sdake> since sam isn't here I won't cover his blueprints
16:42:23 <sdake> mongodb + ceilometer, coolsvap said he is working on it this weekend to wrap up
16:42:32 <sdake> in my mind, that leaves us in good shape for a release
16:42:42 <sdake> the one problem spot is cinder
16:42:46 <sdake> which is totally bust
16:43:06 <sdake> anyone brave enough to take on ownership of fixing cinder?
16:43:34 <rhallisey> I will if I finish the refactor in time
16:43:48 <rhallisey> like this week
16:43:52 <pbourke> sdake: can you summarise the problem if possible?
16:44:03 <sdake> lvm creation works, attaching to vm does not work
16:44:20 <sdake> but i did a lot of work to make lvm creation work that may be incompatible with vm attachment
16:44:45 <sdake> it may be that we kick cinder out of liberty
16:45:08 <sdake> are there other pieces of work people intend to bring into rc1?
16:45:53 <sdake> jpeeler since your here, mind giving us a quick update on ironic?
16:45:55 <jpeeler> i have a review up for ironic, but i need to finish it
16:46:06 <jpeeler> it just needs the neutron portion completed
16:46:26 <sdake> do you end up with a nova that can schedule bare metal and vms?
16:47:16 <jpeeler> no, i sent out an email to dev list about it and got no response, so i don't think that is possible without multiple availability zones
16:47:33 <sdake> jpeeler i'll follow up with the ptl and try to get a response to your email
16:47:52 <sdake> did you tag with [ironic] ?
16:47:58 <jpeeler> i did
16:48:09 <sdake> ok, i think we want to avoid having ironic take over nova
16:48:31 <sdake> sounds like you're almost blocked on a response to that email
16:48:57 <jpeeler> it's disabled by default
16:49:09 <sdake> ya but if you want to use it, you can only use ironic and nothing else?
16:49:13 <sdake> that doesn't make any sense ;)
16:49:14 <inc0> yeah, ironic is required to have kolla-based undercloud
16:49:25 <jpeeler> its main use case is for undercloud i would suspect
16:49:33 <jpeeler> http://lists.openstack.org/pipermail/openstack-dev/2015-September/073530.html
16:49:44 <sdake> i want to use it to schedule bare metal vms
16:49:56 <sdake> alongside virtual machines
16:50:10 <sdake> in other words, as an overcloud ;)
16:50:13 <jpeeler> would be nice, but i don't know how to do that
16:50:24 <sdake> ok well lets get some followup on the mailing list
16:50:27 <sdake> i'll take that action
16:50:37 <sdake> #topic bug squashing
16:50:47 <sdake> we have 40-50 bugs in the bug tracker
16:50:50 <sdake> half of them are unowned
16:50:54 <sdake> i try to fix 1-2 a day
16:51:03 <sdake> but it would be helpful if other folks could take on at least 1-2 bugs for the release
16:51:38 <sdake> there are about 20 unassigned
16:51:53 <sdake> after rc1, we have rc2 and rc3, with 1 week intervals
16:52:02 <sdake> at rc1, I think I'm branching liberty
16:52:16 <sdake> and then we will backport for the rc2/rc3 tags if bug fixes hit master
16:52:21 <sdake> that way development can continue on in master
16:52:23 <sdake> after rc1
16:52:34 <sdake> we wont be backporting features
16:52:38 <sdake> just bug fixes
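The rc branching flow sdake describes (bug fixes land on master first, then get backported to the release branch for the rc2/rc3 tags) might look like the following from a contributor's shell — the repo, branch cut, and fix here are a throwaway demo of the mechanics, not Kolla's actual history:

```shell
set -e
# Demo of the master-first backport flow in a throwaway repo.
repo=$(mktemp -d)
cd "$repo"
git init -q .
git config user.email demo@example.com
git config user.name demo
echo base > file.txt
git add file.txt && git commit -qm "initial"
git branch stable/liberty              # branch cut at rc1

# Bug fix lands on master first...
echo fix >> file.txt
git commit -qam "Fix cinder attach bug"
fix_sha=$(git rev-parse HEAD)

# ...then is backported: cherry-pick with -x records the original
# sha in the commit message, which keeps the backport auditable.
git checkout -q stable/liberty
git cherry-pick -x "$fix_sha"
git log -1 --pretty=%B                 # message ends with "(cherry picked from commit ...)"
```

In a real OpenStack workflow the backport would then be pushed to Gerrit for review against the stable branch rather than merged locally.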
16:54:00 <sdake> can i get acks from folks in the meeting that everyone understands the plans and schedule?
16:54:12 <pbourke> +1
16:54:23 <harmw> check
16:54:29 <sdake> hey harmw :)
16:54:31 <harmw> :)
16:54:31 <jmccarthy> ack
16:54:33 <jpeeler> sounds good
16:54:50 <sdake> ok thats all I have so :)
16:54:54 <sdake> #topic open discussion
16:55:10 <sdake> i know i've been burning up our agenda the last month and left little time for open discussion
16:55:14 <sdake> so there's 30 mins available now :)
16:55:27 <inc0> when do we move to openstack github?
16:55:34 <sdake> 11th (friday)
16:55:40 <inc0> cool
16:56:27 <pbourke> would be interested to know what kind of test scenarios people are running these days?
16:56:31 <pbourke> if any? ;)
16:56:43 <sdake> pbourke i run multinode over and over 3 control 1 network 3 storage 3 compute
16:56:46 <sdake> works like a champ
16:56:50 <sdake> for centos +binary
16:56:53 <pbourke> sdake: nice
16:57:03 <pbourke> sdake: im attempting the same but not yet got to booting a vm
16:57:04 <inc0> I did ubuntu and it works as well
16:57:12 <sdake> i have noticed one problem, if a container is in restart state, the container contents isn't updated, need to file a bug for that
16:57:13 <inc0> building behind proxy works like a charm
16:57:21 <rhallisey> sorry dc'd
16:57:23 <harmw> sdake: ?
16:57:38 <pbourke> it would be nice to brainstorm pain points people are noticing during testing at some stage
16:57:44 <harmw> sounds like you want it to pull again, right?
16:57:51 <sdake> harmw if a container is restarting, it wont be pulled again
16:57:56 <rhallisey> is anyone else running into the haproxy user creation issue?
16:58:04 <pbourke> rhallisey: yes
16:58:05 <sdake> rhallisey could you go into more detail
16:58:07 <rhallisey> it's driving me crazy
16:58:10 <harmw> doesn't restart policy do the pulling sdake ?
16:58:12 <pbourke> rhallisey: only in AIO though?
16:58:15 <sdake> i've had haproxy user creation fail a couple times during my 50-100 deploys
16:58:22 <sdake> harmw no
16:58:23 <rhallisey> pbourke, ya that's why I use
16:58:31 <harmw> ok , please file a bug:)
16:58:42 <pbourke> https://bugs.launchpad.net/kolla/+bug/1491782
16:58:43 <openstack> Launchpad bug 1491782 in kolla "Fail to create haproxy mysql user" [Critical,Triaged]
16:58:44 <sdake> rhallisey how consistent is it for you?
16:58:46 <rhallisey> takes forever to test cause 1/2 the time haproxy blows up
16:58:59 <pbourke> it's intermittent it seems
16:59:07 <rhallisey> sdake, It will break like 10 times in a row then stop
16:59:08 <sdake> its only AIO?
16:59:10 <rhallisey> and come back
16:59:13 <rhallisey> ya it's AIO
16:59:31 <inc0> why do we even run haproxy in aio tho?
16:59:40 <sdake> inc0 simpler to setup playbook wise
16:59:54 <vbel> check moving parts :)
16:59:56 <sdake> btw, please do not recommend weird debug sessions with people like turning off haproxy
17:00:08 <sdake> it makes it much harder to diagnose their actual problem ;)
17:00:27 <sdake> speaking of diagnostics, we could really use some more docs ;)
17:00:42 <sdake> people struggle over the meaning of a VIP
17:01:08 <harmw> anything specific sdake (area)?
17:01:08 <sdake> and the ha architecture
17:01:10 <sdake> i get questions on it all the time
17:01:29 <sdake> people want to understand how the haproxy round robin works
17:01:31 <harmw> maybe I can get some time to get into /docs :)
17:01:38 <sdake> i still haven't quite got it pictured in my brain
17:01:44 <inc0> https://blueprints.launchpad.net/kolla/+spec/sanity-check-container this would help in terms of finding which service broke in the first place
17:02:10 <inc0> it's often that for example glance endpoint creation fails because keystone didn't start correctly
17:02:25 <pbourke> sdake: have you experienced any timing issues wrt maria coming up before keepalived? So it can't access the vip (multinode)
17:02:31 <inc0> and ansible marked it as success because there is no checking whatsoever
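inc0's sanity-check idea — don't declare a deploy step successful until the service it depends on actually answers — could be sketched as a small poll loop. The keystone URL and timeouts below are illustrative placeholders, not Kolla's actual values:

```python
import time
import urllib.error
import urllib.request


def wait_for_http(url, timeout=60, interval=2):
    """Poll `url` until it answers over HTTP, or give up after `timeout` seconds.

    Any HTTP response, even a 4xx, means the service is up and speaking
    HTTP, which is all a deploy-time sanity check needs to know.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            urllib.request.urlopen(url, timeout=interval)
            return True
        except urllib.error.HTTPError:
            return True  # server responded, just not with 2xx
        except (urllib.error.URLError, OSError):
            time.sleep(interval)  # not up yet; retry
    return False


# Hypothetical usage: gate glance endpoint creation on keystone answering.
# if not wait_for_http("http://controller:5000/v3", timeout=120):
#     raise SystemExit("keystone never came up; aborting endpoint creation")
```

A check like this, run per service, would surface "keystone didn't start" directly instead of letting it show up later as a mysterious glance failure.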
17:03:00 <inc0> pbourke, keepalived runs before maria in ansible
17:03:00 <pbourke> how do people feel about supplying a handful of helper ansible tasks. for resetting the cluster, checking status etc
17:03:09 <jpeeler> i like the idea, but creating separate containers for each service seems like overkill to me
17:03:18 <inc0> and it's rocket fast...if you remember to enable ip_vs
17:03:19 <pbourke> inc0: we're hitting instances where that's not the case, I'll follow up with a bug
17:03:23 <sdake> pbourke haven't seen that but i have really fast hardware
17:03:29 <pbourke> sdake: kk
17:04:37 <inc0> yeah, we hit all sorts of races because ansible just sets up the container, not waiting for the service to start
17:04:50 <harmw> we could change that
17:04:52 <harmw> probably
17:05:06 <harmw> registering something, and check stdout or something
17:05:09 <pbourke> yeah that really sucks
17:07:02 <inc0> let's brainstorm solution maybe?
17:07:24 <pbourke> harmw's suggestion of checking stdout seems ok
17:07:27 <inc0> checking docker logs will be ugly
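harmw's "register and check" idea could look something like this in a playbook — the host variable, port, and retry counts below are made up for illustration, not taken from Kolla's actual roles:

```yaml
# After starting the container, block until the service actually
# answers on its API port instead of assuming the start succeeded.
- name: Start keystone container
  command: docker start keystone

- name: Wait for keystone to answer on its public port
  wait_for:
    host: "{{ api_interface_address }}"
    port: 5000
    timeout: 120

# Alternatively, register command output and retry until it succeeds:
- name: Check keystone responds
  command: curl -sf http://{{ api_interface_address }}:5000/v3
  register: keystone_check
  retries: 10
  delay: 5
  until: keystone_check.rc == 0
```

Either form makes the failure show up at the task that actually broke, rather than several tasks later — which avoids the docker-logs spelunking inc0 mentions.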
17:07:35 <harmw> sdake: if the formal meeting is over, perhaps we should just move to #kolla now :)
17:07:46 <sdake> yup sounds good
17:07:48 <sdake> thanks folks :)
17:07:51 <sdake> good session :)
17:07:52 <inc0> thanks
17:07:52 <sdake> #endmeeting