20:00:20 #startmeeting kolla
20:00:23 Meeting started Mon Feb 9 20:00:20 2015 UTC and is due to finish in 60 minutes. The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:24 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:26 The meeting name has been set to 'kolla'
20:00:36 #topic rollcall
20:00:39 here
20:00:44 here
20:00:48 steak here \o/
20:00:53 hello
20:00:57 hey ryan
20:01:00 hey daneyon
20:01:02 hey britt
20:01:02 hi
20:01:04 jpeeler make it?
20:01:21 thanks for the ping, yeah
20:01:25 cool
20:01:30 glad you hang out in this channel :)
20:01:32 #topic agenda
20:02:05 #link https://wiki.openstack.org/wiki/Meetings/Kolla#Agenda_for_next_meeting
20:02:11 anyone have anything to add last minute?
20:02:29 nada
20:02:38 nothing
20:02:53 #topic review of super-privileged container approach
20:03:11 I don't know if everyone has had a chance to read the super privileged container spec
20:03:19 but it proposes a new direction for kolla
20:03:42 #link https://review.openstack.org/#/c/153798/
20:03:46 i just performed another review and submitted it just b4 the meeting. I wanted to get your feedback on details for dealing with mysql data
20:04:10 in summary the proposal is to remove kubernetes as a dependency and focus on using docker only with the full docker API available
20:04:17 rhallisey have you had a chance to review the spec?
20:04:19 to perform complete separation among the services, should we have a mysql container for each service?
20:04:25 I saw daneyon's and jpeeler's reviews
20:04:34 I agree to that approach.
20:04:37 sdake, ya I left a few comments
20:04:50 daneyon sounds cool, but complicated :)
20:04:57 we don't have to specify how we do that part tho
20:05:10 I just want general agreement on a change in focus
20:05:14 because we will have to implement it
20:05:26 without this change in focus, I'm not sure what else can be done with the current kolla implementation
20:05:52 I have a lot of outstanding comments I see in the review, I'll submit an update today
20:06:11 what I'd like is for the four of us here from the core team to unanimously agree to the specification
20:06:25 if you disagree, propose an alternative that is viable :)
20:06:36 I agree to the change in focus.
20:06:38 that means 4 +2 votes
20:06:45 on the specification
20:06:54 (mine is implicit in the review request :)
20:07:10 jpeeler/rhallisey able to review this over the next week and beat the spec into submission then?
20:07:47 I read it and I think it's a good idea
20:07:51 sdake: yeah i can do that. but from what i've seen, minor details are all that's left
20:08:04 +2 on the idea for me
20:08:08 cool, so hopefully we can get the spec approved this week - sounds like everyone is on board
20:08:25 #topic milestone #3 planning
20:08:27 I just +2'd
20:08:35 well it needs love yet daneyon :)
20:08:41 but thanks for the vote of confidence :)
20:08:47 for sure
20:09:09 so i would think milestone 3 is going to see big changes :-)
20:09:14 Ok, so since we are using this new approach, I think we need to start to define the blueprints that make up milestone #3 - which is defined as launching stuff via SPC
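[Editor's note: a minimal sketch of what "launching stuff via SPC" could look like with plain docker, based only on the flags discussed later in this meeting (--net=host, --pid=host, bind-mounted host directories such as /run); the container and image names are hypothetical, not the actual kolla images:

    # hypothetical super-privileged container launch for a libvirt service
    docker run -d --name nova-libvirt \
        --privileged \
        --net=host \
        --pid=host \
        -v /run:/run \
        -v /var/lib/libvirt:/var/lib/libvirt \
        example/nova-libvirt
]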
20:10:10 Would creating an easy to use dev environment be at the top of the list?
I don't think kube-heat will be of use to us
20:11:08 daneyon_ I think we can easily create something based on old heat-kubernetes that just launches 3 VMs to run the various nodes
20:11:14 #link https://etherpad.openstack.org/p/kolla-blueprint-brainstorm
20:11:29 how is the networking going to work?
20:11:36 I'd like to brainstorm here in this etherpad, and I'll convert em into blueprints
20:11:43 --net=host, in other words using the host network stack
20:12:21 does that give each container a "real" ip?
20:12:27 jpeeler: I think we start off with nova-network like we did before
20:12:42 ya lets start with nova-network
20:12:53 jpeeler it gives all containers the same ip on the system
20:12:56 jpeeler: then we move to neutron, using either ovs or linuxbridge + ML2 plugin
20:13:39 so the development env would be a dressed down all-in-one system?
20:13:47 i guess i'm thinking more about container communication
20:13:54 britthouser more or less
20:14:17 jpeeler the containers essentially use the host's network, so if they communicate its almost as if they were a process running in the host os
20:14:33 ok just trying to wrap my head around it, thanks
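[Editor's note: a small illustration of the --net=host behavior described above, using two hypothetical service images; with host networking both containers share the host's interfaces and IP, so they reach each other the same way two host processes would:

    # both containers use the host network stack (no per-container IP)
    docker run -d --net=host --name keystone-api example/keystone-api
    docker run -d --net=host --name glance-api example/glance-api
    # from inside one container the other service is reachable on the host's
    # own address, e.g. keystone's public port 5000 on localhost
    # (assumes curl is present in the image)
    docker exec glance-api curl http://127.0.0.1:5000/
]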
20:14:41 need more input on the blueprints - can folks start adding stuff to the etherpad ;-)
20:15:00 What about the containers we need that we don't yet have
20:15:08 rhallisey can you add those to the etherpad
20:15:38 testing?
20:15:43 sdake, sure, I'm not sure what we don't have yet..
20:15:52 * rhallisey looks
20:16:12 rhallisey read the spec - it has the desired container set - compare vs what we have
20:16:40 gotcha
20:16:42 jpeeler super-privileged containers are a really thin chroot essentially :)
20:16:56 so we have "develop new containers"
20:17:06 add that in and put some subheadings :)
20:17:20 what are the OS details that we want to run this stuff on... Fedora atomic?
20:17:32 atomic doesn't have git or any useful tools for development
20:17:33 Would the deployment scheme be one container per VM? or all containers on 3 VMs? or doesn't really matter?
20:17:35 I think we want to stick with f21
20:17:55 britthouser so we need a tool to deploy an individual container set
20:18:02 can you add that to the etherpad?
20:18:06 sorry, one more question: is it too tripleO unfriendly to use something like fig to bootstrap the environment?
20:18:18 then when you're on a vm, you can run the tool
20:18:30 jpeeler I'd like to not complicate things early with new tools or systems if possible
20:18:41 but long term we can add things like fig or puppet
20:18:42 it's part of docker now as far as i know
20:18:45 I just don't know the right answer yet
20:18:57 can you give a brief overview of fig?
20:19:23 ha, not a competent one. i thought it was like "vagrant up" for containers.
20:19:30 is f21 for the container base image?
20:20:08 I would like to discuss the use of HA tools such as Pacemaker, Corosync, before we add those to the container list
20:20:12 bdastur good question, atm we use f21, but I'd like to go to Centos
20:21:05 sdake: https://www.orchardup.com/blog/fig if it's not in docker (i haven't checked), then not worth thinking about
20:21:40 daneyon_ if you want to have a discussion about HA, we should do it in the spec imo
20:21:54 because those are specified in the spec directly
20:22:17 swift I think doesn't work :)
20:22:54 sdake: Do you think that's an implementation detail? Some people may want to use corosync, others galera
20:23:19 galera is only for mysql iiuc
20:23:24 ?
20:23:25 we need galera too to do ha for mysql
20:23:32 what doesn't work in swift?
20:23:44 kolla containers for swift are busted notmyname
20:24:02 no fault of yours - our fault :)
20:24:16 shouldn't galera be part of the same container as mysql
20:24:43 I actually managed to get two containers on two VMs running rabbitmq in a HA cluster
20:24:51 sdake: I guess in general, I have been involved in HA implementations that do not use the HA tools mentioned in the spec. I think HA may be outside the scope of the initial spec?
20:24:57 each logical service goes in a separate container, and shares the host as necessary via bind mounting /run or /var
20:25:25 ok we can drop HA, although I'd like to tackle HA of mysql and rabbit if possible
20:26:48 so lets have a more general discussion about ha quickly
20:26:50 sdake: I want to tackle it too. I think it must get done for anyone to use kolla in prod. we can work on an implementation.
20:26:56 do folks want to tackle ha in milestone 3 or later?
20:27:04 OK, HA?
20:27:57 we know we want ha for mysql via galera and rabbitmq
20:28:02 maybe we should just start with those
20:28:05 better to validate HA earlier to make sure there are no obstacles we did not anticipate
20:28:09 although I like the check script idea
20:28:59 ok, well lets do this
20:29:05 galera and rabbit seem to be the most popular (or at least most talked about) ways of doing HA, so that is a good starting point.
20:29:06 lets assume we are going to implement everything in the spec
20:29:25 and write out the blueprints assuming we are doing that
20:29:35 and if any part of the spec changes, we can just "Not" the blueprint and forget about it ;-)
20:29:45 (between now and approval of the spec that is)
20:29:52 an initial HA implementation could be 1. HAProxy for API endpoints, MySQL VIP. 2. Galera for clustering the MySQL DB 3. Standard Rabbit Clustering.... implementation specifics would need to get worked out.
20:30:42 ok, what runs the container check script and restarts it if busted?
20:30:48 I guess we need a tool for that!
20:31:01 then we can possibly remove corosync and pacemaker
20:31:15 sdake: if we're going to cluster the DB and MQ, we need something for the API endpoints, DB VIP > HAProxy or some other OS SLB
20:31:34 wow too many acronyms my brain just imploded ;-)
20:31:43 SLB = ?
20:31:54 server load balancer
20:31:58 sdake: How does the script know if the container is busted?
20:32:11 it runs a check script in the container via docker exec
20:32:16 SLB - Server Load Balancer, HAProxy, F5, etc..
20:32:19 if the check script returns 0 - gtg, returns -1, restart
20:32:59 the check script can do some form of healthcheck on the software in the container
20:33:15 Probably need some intelligence in that script. If it dies 3x, then don't restart or something like that.
20:33:25 is HAProxy any good? Or is there a better tool
20:33:29 britthouser that is called escalation
20:33:38 at that point, you would want to reset the machine
20:33:43 but lets not think about escalation now
20:33:46 sdake: is the check script looking to see if, for example, a test tenant can talk to the Keystone API and create a user, endpoint, etc?
20:33:47 Ok.
20:33:49 lets assume its dumb and simple :)
20:34:04 right, that would be a keystone-api check script
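[Editor's note: one possible shape for the check-and-restart behavior described above (run a check script inside the container via docker exec, restart on a non-zero exit); the container name and script path are hypothetical, and the real mechanism is still to be settled in the spec:

    #!/bin/bash
    # hypothetical supervisor loop: run the per-service check script inside
    # the container via docker exec; a non-zero exit restarts the container
    while true; do
        if ! docker exec keystone-api /opt/kolla/check.sh; then
            docker restart keystone-api
        fi
        sleep 10
    done
]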
20:35:06 We need something to check the health of the machine, but I think that is above the line we care about
20:35:12 that is what pacemaker + corosync tackle
20:35:25 we only care about health of containers
20:35:31 in a real system, someone will have to sort out health of the bare metal as well
20:35:42 sdake: instead of managing our own scripts for health checking, there is a tool (trying to remember the name) that can run as a process within each container to perform deep health checking.
20:36:04 Ok...so corosync+pacemaker aren't implementing the HA between openstack services, just the containers themselves. I misunderstood that earlier.
20:36:06 it health checks openstack specifics?
20:36:11 Let me dig up the name and I'll send it along.
20:36:25 put a link in the etherpad
20:36:29 yes.
20:36:59 britthouser I removed pacemaker+corosync, I think they are not necessary for container management
20:37:10 i'll dig it up.... in the meantime, lets not set it in stone that we are going to create our own shell scripts from scratch to perform health checking.
20:37:13 corosync + pacemaker manage a group of machines, we are only talking a single machine
20:37:17 will do
20:37:43 * britthouser rereads...
20:38:05 we can run health checks on each of the cluster tools, HAProxy, Galera, etc..
20:38:21 I'm with you now sdake
20:38:31 I don't see a need for corosync or pacemaker at this point. I'm open to hearing from others on the need though
20:39:01 if we need something to monitor health of machines and restart them, that is where pacemaker and corosync come in
20:39:43 health check monitoring through monit #link http://mmonit.com/monit/
20:41:06 license?
20:41:25 open source
20:41:34 which one :)
20:41:41 AGPL
20:41:50 gaahh
20:42:00 who would use that license!
20:42:09 that one makes lawyers cringe for some reason
20:43:08 perhaps we can just put monit in a container if it does the job
20:43:14 run with --pid=host
20:43:35 for now, I say we need to investigate the best way to perform health checking in each container and across the container cluster. Creating our own scripts from scratch should be the last option... I'm open to other tools. I just mention Monit because I used it back when i was doing customer deployments and it worked well
20:43:40 so launch would look like docker exec blah monit x
20:43:58 cool i'll play with it today daneyon_
20:44:02 looks small enough
20:44:23 +1 on avoiding using our own scripts
20:44:44 ya we want our scripts to be simple - less than 100 lines if possible
20:44:54 although not always possible inside a container
20:45:20 Monit needs to be able to restart the pid of the service it's monitoring
20:45:25 ok, i'll spec monit after I give it a go
20:45:47 daneyon_ it would be able to do that with --pid=host
20:46:03 sdake: roger that
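[Editor's note: a rough sketch of the monit idea discussed above; monit runs in its own container with the host PID (and network) namespace so it can see and signal the monitored service processes, and is then invoked ad hoc via docker exec as suggested. The image name is hypothetical and whether monit fits at all is still an open investigation item:

    # hypothetical monit container sharing the host PID and network namespaces
    docker run -d --name monit --pid=host --net=host example/monit
    # ask the monit daemon for the status of its monitored services
    # (assumes monit's embedded HTTP interface is enabled in its monitrc)
    docker exec monit monit summary
]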
20:46:28 any other suggested blueprints?
20:46:32 what about container logging?
20:46:43 that needs to go in the spec if you want it :)
20:46:59 maybe we can defer that to a later spec
20:47:07 if we're going to address monitoring the services, should we dev a logging solution?
20:47:09 I have no idea how to do the job on that point
20:47:33 I like logging to stdout personally :)
20:47:42 and have some tool capture them via docker log
20:47:51 OK, we can address logging in a follow-on milestone
20:48:01 or a follow-on spec in this milestone as well
20:48:06 so dates
20:48:15 I think we have beat the etherpad into pretty good shape
20:48:25 I'll not convert to blueprints until we have approved the spec
20:48:33 so if you want, feel free to edit as you see fit
20:49:15 #link https://wiki.openstack.org/wiki/Kilo_Release_Schedule
20:49:49 March 19 is k3, I think it makes a lot of sense to align with the OpenStack project's release schedule
20:50:05 and that is about 4 weeks after we wrap up our planning
20:50:25 yah/nay? :)
20:50:44 sounds good
20:50:52 yah
20:51:13 cool sounds good then no nays :)
20:51:16 #topic open discussion
20:51:25 we have 9 minutes but we had a lot of open discussion already
20:51:32 seems like people are fired up for this new approach to me :)
20:51:40 ya!
20:51:50 Huzzah!
20:51:53 Another monitoring option #link https://github.com/stackforge/monitoring-for-openstack
20:52:03 sdake, when we get there, I can write us the selinux policy needed when using the super privileged containers
20:52:11 not much dev lately, but definitely better than starting from scratch
20:52:13 rhallisey that would totally rock!
20:53:45 ok anything else?
20:53:49 or shall we end the meeting
20:54:12 daneyon_ those monitoring scripts look interesting, would work well as check scripts
20:54:26 time for lunch. thx.
20:54:32 #endmeeting
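[Editor's note: for reference, the "log to stdout and capture via docker log" idea raised at 20:47:33 would look roughly like the following; the container name is hypothetical and no logging approach was decided in this meeting:

    # a service that writes its logs to stdout/stderr can be read back
    # (and followed) from the host with the docker logs command
    docker logs -f keystone-api
]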