16:29:08 #startmeeting kolla
16:29:08 Meeting started Wed Nov 25 16:29:08 2015 UTC and is due to finish in 60 minutes. The chair is SamYaple. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:29:09 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:29:11 The meeting name has been set to 'kolla'
16:29:21 #topic rollcall
16:29:41 o/
16:29:43 still a minute early, but here
16:29:56 o/
16:29:57 hi
16:30:16 my clock might be a minute off
16:30:25 o/
16:30:26 im going to give it a few minutes for the checkin
16:30:40 yeah, it doesn't seem we have quorum
16:31:04 I will use this precious moment to announce: oslo.log is broken
16:31:24 inc0: just hold up, give it til 1635
16:31:47 hi
16:32:06 hey :)
16:32:07 o/
16:33:35 #topic announcements
16:33:54 nihilifer is now officially a core! w00t
16:34:10 its been a 'known' thing but he just got added to the group
16:34:12 so congrats
16:34:14 congrats once again :)
16:34:25 Michal has become the most popular first name in the core team
16:34:35 i vote we call him Michal2 in person
16:34:46 thats all the announcements I know of, anyone have anything to add
16:34:51 nada
16:34:52 o/
16:35:02 thanks again for nominating me
16:35:26 #topic status of oslo.log
16:35:30 take it away inc0
16:36:09 ok, well, akwasnie was battling logging, I'm battling logging, and the OSAD guys said they have lots of problems with it as well
16:36:26 the OSA guys arent using syslog though
16:36:38 they write to file, then read it with an external syslog
16:36:39 yeah, because it was broken
16:36:53 they experienced horrible wait conditions, as they say
16:37:20 details?
16:38:50 inc0: you still there?
16:38:55 odyssey4me> inc0 some services don't use the library in the right way - often causing the service to hang if, for instance, you restart rsyslog
16:39:12 inc0: no thats a bug in rsyslog
16:39:25 thats been fixed in upstream things
16:39:40 ok, I'll tell them then
16:39:42 we should probably check our versions are good
16:39:54 well there is _a_ bug that will do that, should i say
16:40:13 they may well have found a different one, but the bug i know about blocks on rsyslog restart
16:40:19 and that probably hangs the service
16:40:35 but we havent experienced that; what we experience is logging doesnt work with errors
16:41:03 any progress on that front?
16:41:34 I thought akwasnie fixed that recently
16:41:55 i just repaired keystone errors
16:42:08 can the same fix be applied across the rest?
16:42:13 no thats apache
16:42:14 but tracebacks may not work because they are handled as errors
16:42:22 the openstack services that use oslo.log still push stack traces and errors to stderr
16:42:36 and the way to log tracebacks is to handle them as exceptions in the logging python module
16:42:43 and then, they can be logged to rsyslog
16:43:11 for the openstack services, there appears to be an option to do just that
16:43:15 we arent enabling it
16:43:33 SamYaple, which one?
16:43:39 use_stderr
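A minimal sketch (not Kolla code) of the approach inc0 outlines above: keep tracebacks off stderr (the use_stderr option SamYaple names disables that path in oslo.log) and log them through the logging machinery so they reach rsyslog. oslo.log builds on the stdlib logging module, so plain logging illustrates the flow; the 'kolla' logger name and LOG_LOCAL0 facility here are arbitrary choices, not what the services actually use.

```python
import logging
import logging.handlers

LOG = logging.getLogger('kolla')
LOG.setLevel(logging.DEBUG)

# Route records to the local rsyslog daemon instead of stderr.
handler = logging.handlers.SysLogHandler(
    address='/dev/log',
    facility=logging.handlers.SysLogHandler.LOG_LOCAL0)
handler.setFormatter(logging.Formatter('%(name)s %(levelname)s %(message)s'))
LOG.addHandler(handler)

try:
    raise RuntimeError('boom')
except RuntimeError:
    # logger.exception attaches the traceback to the record, so the full
    # stack trace flows through the syslog handler rather than to stderr.
    LOG.exception('something went wrong')
```

Run on a host where rsyslog listens on /dev/log, the traceback from the raise lands in syslog instead of on the console, which is the behaviour the bug below is about.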
16:44:15 can it be used globally or with every logging call?
16:44:34 its in the conf for the service
16:44:43 like 'verbose=True' 'stderr=False'
16:44:52 or use_stderr=False
16:45:00 ok
16:45:03 I'll play around with it
16:45:07 im not sure thats going to do what we want, but im not sure I fully understand the issue
16:45:14 inc0 submitted a bug
16:45:17 https://bugs.launchpad.net/kolla/+bug/1519381
16:45:17 Launchpad bug 1519381 in kolla "Tracebacks aren't logged to rsyslog" [Critical,In progress] - Assigned to Michał Jastrzębski (inc007)
16:45:22 because I can't get logging.conf to work
16:45:38 neither did i
16:45:59 akwasnie, but you didn't try use_stderr=False right?
16:46:17 yes, i missed this one
16:47:00 #topic liberty 1.1 release
16:47:17 so we are doing very well with the bugs
16:47:20 #link https://bugs.launchpad.net/kolla/liberty/+bugs
16:47:37 of those 3, 2 are in-progress upstream
16:47:48 if you know of additional bugs that affect liberty, please tag them or file them
16:48:11 #link https://launchpad.net/kolla/+milestone/mitaka-1
16:48:22 the Essential blueprints that are not implemented are still outstanding
16:49:04 I may not be able to get to the version tagging one
16:49:07 I've done some reading on mine (keepalived). Seems straightforward, i'm just working on getting a multinode dev setup so I can test it.
16:49:31 britthouser: no problem! if you need any help hit up the channel.
16:49:43 Will do, probably first of next week.
16:50:03 i havent seen sdake around a lot, but he has other more important things. would someone like to take over the ovs data container work?
16:50:07 it should be a simple fix
16:50:28 the fix should actually be entirely ansible code
16:51:12 SamYaple, it's just using the existing data container, but mounting in the correct dirs right?
16:51:20 sounds fine
16:51:30 I can take it if needed
16:51:32 rhallisey: correct
16:51:39 yea it should be a quick one
16:51:49 we just want the ovs db in its own volume
16:52:05 sure
16:52:31 #topic Kolla-rolled docker module
16:52:38 #link https://review.openstack.org/#/c/248812/
16:52:47 Some background on this:
16:53:16 Ansible folks currently control the docker module that we use. This has led to us waiting on them and forcing new versions of ansible on the users _just_ to get a simple bug fix
16:53:31 for example, the 1.8.2 docker version pin is because of ansible
16:53:54 They have stated they will not fix the v1 registry pulling for the docker module
16:53:59 SamYaple, this is functional?
16:54:04 inc0: no, its WIP
16:54:07 ok
16:54:15 right now it just returns a list of all running containers
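For context, a hedged sketch of what the WIP module does at this stage as SamYaple describes it (it only returns a list of running containers). This is not the code in the review linked above; the file layout is made up, the socket path is just the Docker default, and it assumes docker-py's pre-2.0 Client API. It is meant to be invoked by Ansible, which supplies the module's arguments.

```python
# kolla_docker.py - hypothetical minimal Ansible module sketch
import docker

from ansible.module_utils.basic import AnsibleModule


def main():
    # No options yet; the WIP module only reports state.
    module = AnsibleModule(argument_spec=dict())

    client = docker.Client(base_url='unix://var/run/docker.sock')
    # containers() with no arguments lists only running containers.
    running = [c['Names'] for c in client.containers()]

    module.exit_json(changed=False, containers=running)


if __name__ == '__main__':
    main()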
16:54:38 So the idea with this module is to build our own so we can be in control of bug fixes
16:54:58 but that means we'll also fix the bugs ourselves - more code to maintain
16:55:02 this module is so critical to us we can't wait for a bugfix or new feature upstream and _force_ ansible upgrades locally
16:55:28 agreed, but right now i submit bug fixes to ansible upstream
16:55:34 that would just be internalized
16:55:42 can't we pick up the docker module from the ansible repo instead of writing it from scratch?
16:55:57 additionally we wont be as complicated as the ansible module because we dont need to support all versions of docker and all versions of docker-py
16:56:02 nihilifer: licensing
16:56:09 nihilifer: its gplv3 so we cant fork it
16:56:24 aww, ok
16:56:58 the good news about this is we will not be forced to upgrade to ansible 2.0 if we continue down this "build-your-own-module" front
16:57:16 we will be running ansible 2.0 in the kolla-ansible container, but not require it on the host
16:57:36 any thoughts on doing this?
16:57:45 (creating our own module)
16:58:16 how long will we have to do this?
16:58:28 in other words, is there an end to this
16:58:31 I was thinking along the same lines
16:58:50 because in a sense it worries me
16:58:52 If there is an end, we should still be submitting fixes to ansible's module too, right?
16:58:58 nope. we will carry this for the life of the project, otherwise we are in the same boat again, waiting on ansible to improve kolla
16:59:03 is it technically possible to use different ansible modules under some condition? it would be nice to use the upstream module when running ansible 2.0 in future
16:59:28 nihilifer: what do you mean?
16:59:51 say the ansible community were to listen to us more, would it solve our problem?
16:59:55 no
17:00:10 that would still mean a forced ansible upgrade to the latest version to improve kolla
17:00:18 they explicitly don't want to fix it in 1.x
17:00:20 SamYaple: i mean whether it's possible to call different modules with the same arguments, and pick up the module to run by some if conditional
17:00:42 to be able to use the upstream docker module if someone will run ansible >=2.0
17:00:48 nihilifer: if the arguments match up, you could change the name of the module in ansible, yea
17:00:51 oh no
17:00:54 could we have kolla-ansible 2.0
17:01:01 -2 on that
17:01:05 and a kolla-ansible 1.x
17:01:07 i dont want to maintain twice our code base
17:01:21 ill maintain this module alone before we duplicate our code base like that
17:01:35 well the opposing view is to maintain a module
17:01:54 SamYaple, maintaining "alone" is not too scalable, we need full community support on this
17:02:06 i know
17:02:35 so the issue is we are hamstrung right now
17:02:44 i don't like the idea of maintaining our own module forever
17:02:48 SamYaple, how different is using 2.0 vs what we have now
17:02:52 question tho, did anyone check if there is a ready upstream project that is an alternative to core ansible?
17:02:56 rhallisey: a good bit different
17:03:09 its been over a month since the 1.8.2 bug came out, and still no new ansible version
17:03:16 we can't fix this even if we wanted to
17:03:26 this is a critical piece of our project
17:03:39 can we at least agree with the ansible people on an API? so we can create something more-or-less compatible with what they will have in ansible v2?
17:03:48 additionally, the v2 registry can take hours and hours to push to, but they wont support the v1 registry
17:03:51 inc0: +1
17:03:58 at least we should be 100% compatible
17:04:03 in the arguments we take in the module
17:04:08 and in behavior
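A sketch of the "compatible args" idea inc0 is floating: a hypothetical argument_spec mirroring some of the upstream Ansible docker module's option names, so that playbooks could later swap the upstream module back in with little churn. The option names and state choices shown are illustrative and would need checking against the upstream module; the stub only validates and echoes parameters.

```python
from ansible.module_utils.basic import AnsibleModule


def main():
    # Hypothetical spec kept name-compatible with the upstream module.
    module = AnsibleModule(
        argument_spec=dict(
            name=dict(type='str'),
            image=dict(type='str'),
            state=dict(type='str', default='running',
                       choices=['present', 'running', 'stopped', 'absent']),
            pull=dict(type='str', default='missing',
                      choices=['missing', 'always']),
            volumes=dict(type='list'),
        ),
    )
    # Real container handling would go here; this stub only echoes the
    # validated parameters back to the caller.
    module.exit_json(changed=False, params=module.params)


if __name__ == '__main__':
    main()
```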
17:04:16 i disagree there
17:04:23 yeah, thing is, we will want to switch to upstream at some point
17:04:29 i also disagree
17:04:38 inc0, that's what I was thinking
17:04:41 we dont, otherwise we will be in the same boat
17:04:45 maybe in 2.0 that becomes a reality
17:04:53 SamYaple probably doesn't want to switch to upstream never ever
17:04:59 SamYaple, ansible may change policy, I'm -2 on burning bridges
17:05:08 its not a policy thing
17:05:26 SamYaple: what if in future there will be some bugs resolved in upstream ansible?
17:05:26 listen, to get these fixes we _must_ upgrade the ansible version
17:05:29 this is true forever
17:05:30 if we keep more or less compatible, when we decide we're ready to switch back to ansible core modules
17:05:33 we can do it
17:05:38 will you watch all their activities to keep in sync?
17:05:41 the bugs are currently solved in ansible upstream
17:05:58 another option would be chasing trunk of ansible
17:06:07 we do that with openstack
17:06:17 inc0: no we can't chase ansible trunk
17:06:20 its always broken
17:06:34 i dont want to switch to ansible 2.0 out of the gate because its so busted (its a full rewrite)
17:06:39 but we _must_
17:06:43 it seems to me it's broken now even tho it's released
17:06:44 unless we write our own module
17:07:28 kolla-salt then?
17:07:34 I just don't want to detach here from ansible..
17:07:38 inc0, lols
17:07:45 ill speak for sdake since he isnt here, he is 100% behind this and pushing me to do this
17:08:01 this means our own module?
17:08:04 yes
17:08:16 I'm ok with our own module, I'm not ok with going opposite to ansible
17:08:23 define opposite?
17:08:25 let's keep the API at least similar
17:08:33 arguments and basic behaviours
17:08:35 In the module we write, will the APIs be compatible with upstream?
17:08:36 i'm ok with our own module as a temp solution
17:08:38 not forever
17:08:40 are you just talking about the args?
17:08:46 meaning we could drop theirs in, if it ever starts working?
17:08:48 yeah, args
17:08:50 names
17:09:10 that sort of stuff, so we can always come back to ansible with as little work as possible
17:09:15 i dont think people realize this isnt about a short term solution, but a long term one
17:09:22 lets talk about it like a long term one please
17:09:33 long term I don't like it
17:09:39 SamYaple, I'm not ready to commit to supporting a thing like that forever
17:09:53 then suggest a solution please to the issue presented
17:09:54 I'd rather switch from ansible to something more manageable
17:10:04 because we'll hit more problems at some point
17:10:12 keeping the names/args the same seems like a good middle ground.
17:10:24 not with docker but with core ansible, or another module we happen to use
17:10:38 we won't rewrite ansible then
17:10:42 inc0, interesting point, but I'd like to at least try and work this out with them
17:10:56 rhallisey: that won't solve anything. these bugs are fixed upstream
17:11:01 i fixed the 1.8.2+ one
17:11:12 then maybe help them keep trunk not-broken
17:11:15 the problem is we are forced to upgrade just for that one little docker-py change
17:11:25 inc0: trunk broke because docker upgraded
17:11:28 it will happen again
17:11:49 it will happen for us as well
17:11:56 sure, but we can fix it
17:12:07 its been a month, still no consumable fix from ansible
17:12:08 we can fix it here or in the ansible repo
17:12:17 i fixed it in the ansible repo a month ago
17:12:22 either way works if we won't be bound to the ansible release schedule
17:12:22 still not consumable by us
17:12:26 and chase trunk
17:12:27 So if we write our own, but keep it "compatible" with the upstream module, then at any time we could switch in the upstream version. that gives us the possibility of getting "out" down the road.
17:12:43 britthouser: the arg names dont matter to me
17:12:50 sure why not
17:13:08 but the solution to the problems wont ever change with ansible upstream
17:13:21 they arent doing anything _wrong_ they are just their own project
17:13:45 my suggestion - we start working on a local module kept as compatible with the latest ansible as possible
17:14:05 SamYaple, we can hit that problem with EVERY dependency we have, including openstack
17:14:07 arg names can be the same, doesnt matter to me
17:14:18 but openstack will affect all of openstack
17:14:21 its not the same problem
17:14:32 it _will_ get fixed and released in a timely manner
17:14:38 ansible will affect everyone who uses ansible+docker, and that's not just us
17:14:40 thats not the case with ansible (its been a month)
17:14:50 chasing trunk then
17:14:57 we will not be chasing trunk ever
17:15:00 why?
17:15:16 why what?
17:15:17 I think we have a proposal everyone can agree on?
17:15:33 write our own, keep it compatible.
17:15:34 let's start with our own but compatible, that's my suggestion
17:15:40 britthouser: agreed, this still needs more discussion, but we can do so later
17:15:41 sounds like a local module that's compatible
17:15:47 yup
17:15:58 and we can discuss it at the midcycle, its super important
17:16:04 SamYaple, I'm going to look at 2.0 ansible-docker here
17:16:08 that's a good idea inc0
17:16:19 inc0, agreed
17:16:24 rhallisey: the 2.0 docker module?
17:16:54 actually we can discuss this more in the channel, we are running a bit out of time.
17:16:58 #topic open discussion
17:16:59 SamYaple, or whatever the module causing the issue.
17:17:02 floor is open
17:17:05 docker-py
17:17:21 any more details about the mid-cycle?
17:17:32 SamYaple, do you have a sign-up
17:17:41 rhallisey: no, sdake was putting that up
17:17:49 I will get with him if he has not yet
17:18:01 do we have dates? want to block off my calendar
17:18:05 britthouser: not anything past February-ish in Greenville SC
17:18:15 ill talk with sdake and get this going
17:18:22 * britthouser blocks the entire month of february
17:18:25 =P
17:18:41 do it!
17:18:57 spend the rest of February on reviews
17:18:57 SamYaple: sorry i was lurking in the channel. RE Ansible 2.0, the OSAD team wants to use it in the future, however we'll likely wait a few releases because our initial testing revealed quite a bit of brokenness and bad life choices. sorry for the outburst, I just wanted to back up what SamYaple was talking about.
17:19:03 ill get you guys something on the ML by friday
17:19:19 cloudnull: so last topic
17:19:24 2.0 will need much love before its production ready
17:19:34 cloudnull: yea thats what we want too, but the docker module is pretty crucial to us
17:20:12 anyone have anything else they need to talk about
17:20:15 for sure. they have some bits we need too, and we're looking at ways to backport (manually) to 1.9.x if we absolutely need to
17:20:21 anyway cheers
17:20:27 cloudnull: thanks :)
17:20:39 thanks cloudnull
17:21:54 looks like nothing else
17:22:02 sdake has arrived!
17:22:02 o/
17:22:05 #endmeeting