20:00:25 #startmeeting kolla
20:00:25 Meeting started Mon Mar 16 20:00:25 2015 UTC and is due to finish in 60 minutes. The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:26 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:28 The meeting name has been set to 'kolla'
20:00:32 #topic rollcall
20:00:36 \o
20:00:46 hi
20:01:09 o/
20:01:13 Hi!
20:01:18 daneyon_!
20:01:25 #topic agenda
20:01:27 o/
20:01:45 #link https://wiki.openstack.org/wiki/Meetings/Kolla
20:01:58 so shardy has joined us from the heat community to talk about fig integration
20:02:03 anyone have anything to add to the agenda?
20:02:32 #topic TripleO Heat Integration
20:02:55 we have spent the last milestone essentially making kolla consumable by third parties
20:03:01 previously it was not consumable
20:03:14 it had a dependency on k8s and had no way to launch the containers independently
20:03:26 we have fixed that with fig, and our idea of running the containers on bare metal rather than in k8s
20:03:48 shardy for your benefit, we have done a significant amount of docker-compose work to make kolla integratable by third parties
20:03:54 on that note, shardy take it away :)
20:04:02 sdake: Ok, thanks
20:04:23 so really, I just wanted to make folks aware of the recently revived efforts to make container deployment via heat possible
20:04:53 ramishra has been doing some work on heat SoftwareConfig hooks to enable launching docker-compose containers via heat:
20:05:06 #link https://review.openstack.org/#/c/160642/
20:05:14 #link https://review.openstack.org/#/c/164572
20:05:25 does that review understand the --env feature?
20:05:34 this is just a first step, but we're very keen to engage the kolla community for feedback
20:06:25 shardy I think, looking at kolla from the perspective of the tripleo community over the last 5 months, I would probably be in the position of "that looks neat, how do I integrate with it."
20:06:33 Do you think that problem still exists with the new fig deployment model hitting our repo recently?
20:06:56 sdake: yup, I'm not sure about env, that's the sort of feedback we're looking for
20:07:23 e.g. how can we make this integrate well with the work the kolla community is doing, with a view to e.g. containerized TripleO in the future
20:07:40 i.e. but yup :)
20:08:09 * sdake apologizes for red pen policing a meeting ;)
20:08:09 we've added a bunch of good abstraction points to the tripleo heat templates lately, initially to allow encapsulation of puppet config, but the idea is you can plug in whatever implementation in the future
20:08:45 anyone in the kolla community have any easy or hard qs for shardy?
20:08:57 shardy: can you help us understand the fig support by providing an example? So, would I create a heat template that specifies a nova instance with the fig services that I want that instance to run?
20:10:00 daneyon_: Good question, basically we're looking at two models, one where heat spins up a pre-built image via nova/ironic to run as the docker host, then deploys containers via SoftwareConfig
20:10:30 pre-built os image I assume you mean?
20:10:31 the other approach is using Fedora Atomic images, and hosting the agents needed for the heat integration in a container, which launches other containers
20:10:49 sdake: yup, or heat can bootstrap a vanilla image
20:11:03 shardy, do we want to create a kolla-container resource?
20:11:08 https://review.openstack.org/#/c/160642/6/hot/software-config/example-templates/example-docker-compose-template.yaml
20:11:09 shardy: In model #1, so then would fig be one of the supported software config backends?
20:11:11 here's an example
20:11:16 that might be another alternative
20:11:25 inc0 I think a standard fig container resource would be ideal
20:11:35 inc0: IMO, no - the SoftwareConfig abstractions already provide a clean (and more flexible) interface
20:11:53 group: docker-compose
20:12:26 that is all you need in the template to trigger the docker deployment, and all the complexity of knowing how to do it is contained in the hook script
20:13:05 shardy: thx for sharing the template. That's pretty much what I was picturing, but it helps to see it for real
20:13:21 inc0, sdake: there's no reason you couldn't implement a provider resource containing the SoftwareConfig stuff, if you wanted to hide the plumbing of how it's done
20:13:52 but IMO the complexity and maintenance cost of a python plugin is not warranted (although one does exist for docker already)
20:13:57 shardy we provide fig.ymls, would those need to be encoded in the heat DSL?
20:14:17 sdake: No, you can just reference them from the template with get_file: fig.yml
20:14:25 right, thx
20:14:34 python heatclient can then just pull the fig yaml in and pass it to heat
20:14:51 Then kolla = the container content
20:14:53 heat then does the transport to get it onto the host, and the hook script launches it
20:15:00 daneyon_: exactly
20:15:19 https://review.openstack.org/#/c/160642/6/hot/software-config/elements/heat-config-docker-compose/install.d/hook-docker-compose.py
20:15:25 sounds like a viable integration path
20:15:37 that's the hook script which runs on the host, basically just runs docker-compose, very simple atm
20:16:12 The kolla becomes the container content... i.e. group: docker-compose: config: db: kollaglue/db-container
20:16:19 Does anyone see any glaring pitfalls in this approach, or problems which will prevent us consuming the container content from kolla?
20:16:28 we need env support
20:16:37 i don't see how that works with get_file as described
20:16:39 pid=host, net=host, privileged
20:16:42 maybe env needs to be a property or something
20:16:48 pid=host isn't supported in stock upstream fig yet tho
20:17:03 sdake: Deployment resources can take inputs - env is key/value pairs right?
20:17:16 right, key/value, but in a specific format known only to compose
20:17:27 env is the environment variable file
20:17:28 I wonder if env files would be supported?
20:17:45 #link https://review.openstack.org/#/c/164834/
20:17:53 shardy skim quickly ^ ;)
20:18:09 env_file:
20:18:15 filename.env
20:18:25 env_file is a second get_file needed in the resource
20:18:43 yeah
20:19:00 sdake: think that would 'just work' then?
20:19:11 i think if it was implemented, yes :)
20:19:30 from what I can tell it is not presently, but it doesn't seem all that hard
20:19:40 shardy: after Heat orchestrates the fig deployment, do operators simply use the fig/Docker commands for ongoing management of the services?
20:20:25 daneyon_: they could, but generally the idea is you manage the whole lifecycle of things created by heat with heat
20:20:53 e.g. particularly update/delete of things, or heat won't know about any out-of-band management that happens directly
20:21:23 the nice thing is heat can do stuff like transparently scale out containers, do batched rolling upgrades, things like that
20:21:37 which is why I'm pretty excited by the integration possibilities here :)
20:22:01 shardy: Is the ongoing mgmt through heat currently supported or will that follow on?
20:22:04 sdake, Slower: I'll take an action to investigate the env thing and report back (next week if you like)
20:22:19 ramishra is the expert on this, and unfortunately he's probably asleep ;)
20:22:25 sounds good shardy
20:22:33 shardy: sounds good
20:22:37 #action shardy to investigate env support in heat hook
20:22:44 shardy: and fig can scale containers, so would that feature be disabled?
20:23:01 daneyon_: how does fig scale containers over multiple hosts?
20:23:03 I don't think we need it in this use case
20:23:07 e.g. what orchestrates that?
20:23:17 fig only manages the one host and we only need one instance of whatever services
20:23:20 shardy fig doesn't do multi-host
20:23:27 (hi)
20:23:41 hey stevebaker
20:23:45 stevebaker!!
20:23:49 unfortunately i think you have missed the conversation :)
20:23:50 good evening Steve
20:23:58 anyone else have Qs to ask?
20:24:00 or shall we move on?
20:24:27 shardy: It doesn't scale over multiple hosts today, but I'm sure that's on Docker Inc's agenda with Swarm. Just scaling on the same host doesn't make a whole lot of sense. It can just be confusing if fig supports scaling and so does Heat
20:24:28 I think in conclusion, everyone has done a bang-up job in the last m2->m3 cycle of making fig consumable by third party projects
20:24:36 daneyon_: re the ongoing mgmt, we've got the basic CRUD support going in now, I'm sure there will be further enhancements to come later
20:24:37 party like it's 1999
20:25:04 meeting topic: the awesome
20:25:10 did the compose hook land?
20:25:12 daneyon_: Ok, that's good feedback, sounds like some overlap to be aware of and potentially resolve then
20:25:17 stevebaker: yup
20:25:22 cool
20:25:26 i think docker scaleout can be ignored
20:25:28 https://review.openstack.org/#/c/160642
20:25:29 if heat handles it
20:25:50 i'd rather not involve swarm in the mess we already have ;)
20:26:09 daneyon_: I don't think anyone is saying heat should be the only way to scale out, merely that it's quite a neat integration point and it fits well with the existing TripleO abstractions
20:26:10 pretty sure my brain is at 100 ATM atm :)
20:26:14 sdake: agreed, just wanted to make shardy aware of it.
20:26:39 Ok, well thanks guys, I won't take up any more of your meeting ;)
20:26:46 thanks shardy for coming
20:26:51 feel free to come chat to us in #heat if there are other questions
20:26:55 it's good to know the tripleo community is actually interested in our work ;)
20:27:00 shardy, thanks
20:27:15 sdake: yup, very interested!
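
For reference, a minimal sketch of the kind of heat template discussed above, pieced together from the fragments quoted in the meeting (group: docker-compose, get_file: fig.yml). The real example lives in the linked review; the resource names, parameters, and server properties here are illustrative assumptions, and how env/env_file data gets passed through is exactly the open question shardy took as an action item.

    heat_template_version: 2013-05-23

    parameters:
      image:
        type: string
        description: pre-built docker host image (placeholder)
      flavor:
        type: string
        description: flavor for the docker host (placeholder)

    resources:
      docker_host:
        type: OS::Nova::Server
        properties:
          image: {get_param: image}
          flavor: {get_param: flavor}
          user_data_format: SOFTWARE_CONFIG

      docker_compose_config:
        type: OS::Heat::SoftwareConfig
        properties:
          group: docker-compose        # routes the config to the docker-compose hook on the host
          config: {get_file: fig.yml}  # python heatclient inlines the kolla fig yaml and passes it to heat

      docker_compose_deployment:
        type: OS::Heat::SoftwareDeployment
        properties:
          config: {get_resource: docker_compose_config}
          server: {get_resource: docker_host}

The hook script delivered to the docker host then simply runs docker-compose against the transported yaml, as shardy described above.
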
:)
20:27:34 #topic improving docker-compose integration
20:27:42 ok so we have about 6 fig files
20:27:45 we have about 12 containers
20:27:53 we either need to delete containers or add fig files
20:27:56 what's it gonna be ;)
20:28:34 add fig files
20:28:38 +1
20:28:52 haha
20:28:55 ya +1 :)
20:29:03 add figs
20:29:09 the reason being that nova-conductor creates the db and nova-api creates the keystone user
20:29:10 sounds like it's unanimous
20:29:14 although I think we should focus on getting the ones we have to work solidly
20:29:19 agree
20:29:24 so I can see other race conditions happening
20:29:30 so we need to add to the agenda the topic of "when" we do this work
20:29:40 but we all agree it needs to be done
20:29:46 #topic adding HA features to kolla
20:30:12 the masses want rabbitmq in ha mode, galera in ha mode, and load balancing in ha mode
20:30:25 these are 3 containers we will need to sort out in the coming months
20:30:25 or ignore ha altogether
20:30:37 I'd say HA is our #2 priority after fig completion
20:30:44 what do other folks say?
20:31:02 give the people what they want :p
20:31:19 you will get a black car and like it! :)
20:31:31 :)
20:31:43 again, another topic for "when"
20:31:52 since we can't do it immediately in this m3 cycle
20:32:03 jpeeler about?
20:32:26 since jpeeler is out, I'll skip the CI for this week
20:32:27 oh hi
20:32:30 HA is important
20:32:32 oh hey :)
20:32:32 maybe i'm crazy but I think a minimal functioning container set is #1
20:32:34 sdake, but by increasing the # of fig files, does it mean we are abandoning the idea of container sets?
20:32:45 rhallisey whatever works
20:33:05 I like container sets where they fit - like nova compute or conductor
20:33:11 sdake, maybe your blueprint will need a change then
20:33:13 a minimal functioning containerized openstack install working, I mean
20:33:18 ya the spec needs changing
20:33:28 slower ack that, we need that done so that is priority #0
20:33:36 although I am pretty sure we are close there
20:33:36 ok I like that :)
20:33:38 sdake, that's true, some of them can be made into sets
20:33:54 a set can be 1 or more things
20:33:54 sdake: i'm here - but i don't have an update as i haven't worked on the testing any. when is this milestone over again?
20:33:58 so indeed they can be sets :)
20:34:08 19th of march is the deadline
20:34:25 #topic milestone #3 planning
20:34:26 so march 19th is our milestone deadline
20:34:34 I think we are really close
20:34:45 we should be focused on making the fig.ymls we have published work with the containers we have today
20:34:49 and vice-versa
20:35:03 and assuming all that is correct, I'll cut the m3 branch on the 19th
20:35:13 so that is priority #0, let's focus on that
20:35:26 make sure fig + containers we have today work + make sure what we have is documented
20:35:31 Slower and I have it working on our own images so will push and rebuild those images
20:35:48 let's make the upstream images work rhallisey
20:35:53 yes
20:35:55 figure out the diffs
20:35:59 and get em submitted
20:36:03 that's what I'm saying
20:36:06 cool
20:36:09 sounds good then :)
20:36:25 I am going to bounce all the other stuff into milestone #4
20:36:28 which will land prior to ODS
20:36:44 April 30th sound good for a date?
20:37:03 sure
20:37:08 sdake: re: priority 0, I would like to discuss this: https://review.openstack.org/#/c/159004/
20:37:12 sounds good
20:37:23 sdake: Let me know if you would like to discuss during this meeting or offline
20:37:39 daneyon_: yes
20:37:45 let's discuss now daneyon_
20:37:52 any blockers queue up
20:38:01 s/fedora/centos/ ?
20:38:06 OK. Pls take a look at my last response and help me better understand your concerns
20:38:23 I think we should do all centos and then we can do a patch to allow others
20:38:40 which actually is another topic, how to do that well
20:38:41 quoting you daneyon:
20:38:44 If we do not use a data container, then we simply host mount the required DIRs within the app container as we have always done. In that case, if you rm the fig composed service, then the data is gone in that scenario.
20:38:55 the host would still have the database on it right?
20:39:02 I see the db-app/db-data container set as a perfect example for using the 2 under a single fig yml... treating them as a single service
20:39:04 so when the container starts up again, even after a docker rm, it would be read
20:39:11 that's what we're doing and it persists
20:39:42 slower you mean that container in that review persists?
20:39:46 when I ran it, it did not persist
20:40:01 sdake: Using the standard data mgmt method, the data would be gone if you did a fig rm.
20:40:03 all I care about is that the data persists
20:40:04 sdake: I haven't tried that one
20:40:20 but if you bind mount - it is stored on the host
20:40:25 this is the only way I know to get data to persist
20:40:36 what are best practices for managing individual containers within a fig managed environment?
20:40:53 we are running with -v /var/lib/mysql and it persists
20:40:55 daneyon_ I think docker-compose is so green we are making up best practices
20:40:59 on a plain single container
20:41:22 a service = multiple containers. you want to upgrade portions of the service. How should this be done when using fig?
20:41:25 ya that is a host bind mount
20:41:55 we haven't really sorted out upgrade other than compose pull, compose stop, compose rm, compose up
20:42:10 I have tested this works with nova-compute but not the db containers
20:42:10 yeah if you just add the bind mount to https://review.openstack.org/#/c/163953/2 it'd persist
20:42:33 daneyon_ any objection to adding the bind mount then?
20:42:53 daneyon_: I must confess I do not understand the idea of splitting them up like that?
20:42:56 what do you gain?
20:43:12 one is responsible for execution, one is responsible for storage of data
20:43:15 it makes sense to me
20:43:25 the part that doesn't make sense is the nonpersistence :)
20:44:02 this is one of the things that killed k8s for us - no host bindmounting
20:44:04 Slower: So, you run a container with the mysql bind mount to the host. You stop and destroy the container, start a new container with the mysql bind mount and it has the same data?
20:44:07 specifically for the database
20:44:17 daneyon_ 10-4
20:44:21 daneyon_: yes
20:44:27 daneyon_: because all the data is on the host fs
20:45:57 tbh I'm not sure why you'd add the complexity if you aren't really getting anything from it
20:46:01 Slower: If you need to perform maintenance on the host, then the container is down... you lose a key feature of containers... portability.
20:46:03 * Slower likes simple
20:46:39 I see, so you would just keep that container running forever.. ?
20:46:42 interesting thinking daneyon_
20:46:43 what about reboots?
20:47:00 but I don't think we want to encourage portability in our design methods
20:47:24 the work of figuring out what on the host needs to be copied to a new host for maintenance is an unsolved problem
20:47:30 but I'm not sure we can solve it in a meaningful way
20:47:49 Slower: The thought of separating the two was more from an ongoing operations standpoint. If I wanted to upgrade my mysql app and something goes wrong, I could corrupt or lose my data within the container.
20:48:19 here comes container checkpointing ;)
20:48:23 haha
20:48:57 I'm going to say.. the db should be the one being reliable? :)
20:49:04 daneyon_ if you can make the container persistent between kills, I'm open to having multiple containers
20:49:47 I wonder if a truly robust installation wouldn't have a real dedicated db system
20:50:13 a robust installation system would have dbs running in containers in ha fashion
20:50:21 sdake: I'll go back to a single db container since we have other pressing needs.
20:50:25 and a way to back em up on the fly
20:50:36 daneyon_ two containers is fine, just bindmount the data container
20:50:57 but agree re pressing needs
20:51:11 sdake: That's what I do, and then the app container pulls its vols from the db-data container
20:51:13 anything else out there lurking that needs addressing?
20:51:35 but when it pulls the vols from the db_data container, have the db_data container bindmount
20:51:42 that way it's pulling its vols from the host os
20:51:44 how are we going to address fedora vs centos containers?
20:51:47 maybe that does or doesn't work, I don't know
20:51:54 It would be nice to not duplicate the yml files
20:51:55 sdake: maybe all i need to do is specify the source of the bind mount in the db-data container
20:51:56 slower i think for now we just say "Centos is the way to go"
20:52:08 daneyon_ ack, sounds like that should work
20:52:17 sdake: could you give that a try?
20:52:28 daneyon_ provide a patch, I'll test
20:52:39 sdake: will do
20:52:41 can folks clean up the review queue today
20:52:46 and hammer out all the reviews that are pending
20:52:55 so we can start m0ar testing
20:53:12 yeah
20:53:14 sounds good
20:53:25 #topic open conversation
20:53:27 we have 7 minutes :)
20:54:16 Ian and I started a #kolla channel on freenode
20:54:34 should register it with openstack
20:54:36 if you wanted to discuss there
20:54:37 ok
20:54:38 so it doesn't get ninja hijacked
20:54:53 sounds good
20:54:55 there are instructions in the google search somewhere :)
20:55:02 I'll find it
20:55:06 join #kolla !
20:55:24 apologies for being absent earlier - catching up on the meeting. to summarize, we're thinking long term kolla will integrate with TripleO now?
20:55:35 jpeeler seems plausible
20:56:42 ok folks well thanks for coming!
20:56:43 #endmeeting
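
As a postscript, a sketch of the fig.yml layout the db discussion above converged on: a db-data container that owns the host bind mount, and the db app container taking its volumes from it, so the data stays on the host filesystem and survives a fig rm of either container. Image names, the data path, and the env file wiring are illustrative assumptions rather than the actual kollaglue content; daneyon_'s follow-up patch is what would make this concrete.

    db-data:
      image: kollaglue/db-data            # placeholder data-only image
      volumes:
        - /var/lib/mysql:/var/lib/mysql   # host bind mount lives in the data container
      command: /bin/true                  # the data container only needs to exist, not keep running

    db:
      image: kollaglue/db-container       # name borrowed from the example quoted earlier in the meeting
      net: host
      env_file:
        - filename.env                    # per the env_file discussion; the name is a placeholder
      volumes_from:
        - db-data                         # the app container pulls its vols from db-data

    # Simpler single-container alternative discussed above: put the /var/lib/mysql
    # bind mount directly on the db container. Rough upgrade path either way
    # (verified for nova-compute, not yet for the db containers):
    #   docker-compose pull && docker-compose stop && docker-compose rm && docker-compose up -d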