20:00:25 <sdake> #startmeeting kolla
20:00:25 <openstack> Meeting started Mon Mar 16 20:00:25 2015 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:26 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:28 <openstack> The meeting name has been set to 'kolla'
20:00:32 <sdake> #topic rollcall
20:00:36 <sdake> \o,
20:00:46 <rhallisey> hi
20:01:09 <Slower> o/
20:01:13 <daneyon_> Hi!
20:01:18 <Slower> daneyon_!
20:01:25 <sdake> #topic agenda
20:01:27 <shardy> o/
20:01:45 <sdake> #link https://wiki.openstack.org/wiki/Meetings/Kolla
20:01:58 <sdake> so shardy has joined us from the heat community to talk about fig integration
20:02:03 <sdake> anyone have anything to add to the agenda?
20:02:32 <sdake> #topic TripleO Heat Integration
20:02:55 <sdake> we have spent the last milestone essentially making kolla consumable by third parties
20:03:01 <sdake> previously it was not consumable
20:03:14 <sdake> it had a dependency on k8s and no way to launch the containers independently
20:03:26 <sdake> we have fixed that with fig, and our idea of running the containers on bare metal rather than in k8s
20:03:48 <sdake> shardy for your benefit, we have done a significant amount of docker-compose work to make kolla integratable by third parties
20:03:54 <sdake> on that note, shardy take it away :)
20:04:02 <shardy> sdake: Ok, thanks
20:04:23 <shardy> so really, I just wanted to make folks aware of the recently revived efforts to make container deployment via heat possible
20:04:53 <shardy> ramishra has been doing some work making heat SoftwareConfig hooks to enable launching docker-compose containers via heat:
20:05:06 <shardy> #link https://review.openstack.org/#/c/160642/
20:05:14 <shardy> #link https://review.openstack.org/#/c/164572
20:05:25 <sdake> does that review understand the --env feature?
20:05:34 <shardy> this is just a first step, but we're very keen to engage the kolla community for feedback
20:06:25 <sdake> shardy I think looking from the perspective of the tripleo community at kolla over the last 5 months, I would probably be in the position of "that looks neat, how do I integrate with it."
20:06:33 <sdake> Do you think that problem still exists with the new fig deployment model hitting our repo recently?
20:06:56 <shardy> sdake: yup, I'm not sure about env, that's the sort of feedback we're looking for
20:07:23 <shardy> e.g how can we make this integrate well with the work the kolla community is doing, with a view to e.g containerized TripleO in future
20:07:40 <sdake> i.e. but yup :)
20:08:09 * sdake apologizes for red pen policing a meeting ;)
20:08:09 <shardy> we've added a bunch of good abstraction points to the tripleo heat templates lately, initially to allow encapsulation of puppet config, but the idea is you can plug in whatever implementation in future
20:08:45 <sdake> anyone in the kolla community have any easy or hard qs for shardy?
20:08:57 <daneyon_> shardy: can you help us understand the fig support by providing an example? So, would I create a heat template that specifies a nova instance with the fig services that I want that instance to run?
20:10:00 <shardy> daneyon_: Good question, basically we're looking at two models, one where heat spins up a pre-built image via nova/ironic to run as the docker host, then deploy containers via SoftwareConfig
20:10:30 <sdake> pre-built OS image I assume you mean?
20:10:31 <shardy> the other approach is using Fedora atomic images, and hosting the agents needed for the heat integration in a container, which launches other containers
20:10:49 <shardy> sdake: yup, or heat can bootstrap a vanilla image
20:11:03 <inc0> shardy, do we want to create kolla-container resource?
20:11:08 <shardy> https://review.openstack.org/#/c/160642/6/hot/software-config/example-templates/example-docker-compose-template.yaml
20:11:09 <daneyon_> shardy: In model #1, so then would fig be one of the supported software config backends?
20:11:11 <shardy> here's an example
20:11:16 <inc0> that might be another alternative
20:11:25 <sdake> inc0 I think a standard fig container resource would be ideal
20:11:35 <shardy> inc0: IMO, no - the SoftwareConfig abstractions already provide a clean (and more flexible) interface IMO
20:11:53 <shardy> group: docker-compose
20:12:26 <shardy> that is all you need in the template to trigger the docker deployment, and all the complexity of knowing how to do it is contained in the hook script
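(For readers following along, a minimal HOT template along the lines shardy describes might look like the sketch below. It is modeled loosely on the example template linked at 20:11:08; the `docker_host` server resource and exact property layout are assumptions for illustration, not confirmed syntax.)

```yaml
heat_template_version: 2014-10-16

resources:
  compose_config:
    type: OS::Heat::SoftwareConfig
    properties:
      # the group selects the docker-compose hook script on the host
      group: docker-compose
      # pull an existing fig/compose yaml in verbatim via get_file,
      # rather than re-encoding it in the heat DSL
      config: {get_file: fig.yml}

  compose_deployment:
    type: OS::Heat::SoftwareDeployment
    properties:
      config: {get_resource: compose_config}
      # docker_host is a hypothetical server resource (nova/ironic
      # instance acting as the docker host) defined elsewhere
      server: {get_resource: docker_host}
```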
20:13:05 <daneyon_> shardy: thx for sharing the template. That's pretty much what I was picturing, but it helps to see it for real
20:13:21 <shardy> inc0, sdake: there's no reason you couldn't implement a provider resource containing the SoftwareConfig stuff, if you wanted to hide the plumbing of how it's done
20:13:52 <shardy> but IMO the complexity and maintenance cost of a python plugin is not warranted (although one does exist for docker already)
20:13:57 <sdake> shardy we provide fig.ymls, would those need to be encoded in heat dsl?
20:14:17 <shardy> sdake: No, you can just reference them from the template with get_file: fig.yml
20:14:25 <sdake> right thx
20:14:34 <shardy> python heatclient can then just pull the fig yaml in and pass it to heat
20:14:51 <daneyon_> Then kolla = the container content
20:14:53 <shardy> heat then does the transport to get it onto the host, and the hook script launches it
20:15:00 <shardy> daneyon_: exactly
20:15:19 <shardy> https://review.openstack.org/#/c/160642/6/hot/software-config/elements/heat-config-docker-compose/install.d/hook-docker-compose.py
20:15:25 <sdake> sounds like a viable integration path
20:15:37 <shardy> that's the hook script which runs on the host, basically just runs docker-compose, very simple atm
20:16:12 <daneyon_> The kolla becomes the container content... ie group: docker-compose: config: db: kollaglue/db-container
20:16:19 <shardy> Does anyone see any glaring pitfalls in this approach, or problems which will prevent us consuming the container content from kolla?
20:16:28 <sdake> we need env support
20:16:37 <sdake> i don't see how that works with get_file as described
20:16:39 <Slower> pid=host, net=host, privileged
20:16:42 <sdake> maybe env needs to be a property or something
20:16:48 <Slower> pid=host isn't supported in stock upstream fig yet tho
20:17:03 <shardy> sdake: Deployment resources can take inputs - env is key/value pairs right?
20:17:16 <sdake> right key/value, but in a specific format known to compose only
20:17:27 <rhallisey> env is the environment variable file
20:17:28 <Slower> I wonder if env files would be supported?
20:17:45 <sdake> #link https://review.openstack.org/#/c/164834/
20:17:53 <sdake> shardy skim quickly ^ ;)
20:18:09 <Slower> env_file:
20:18:15 <Slower> filename.env
20:18:25 <sdake> env_file is a second get_file needed in the resource
20:18:43 <Slower> yeah
20:19:00 <Slower> sdake: think that would 'just work' then?
20:19:11 <sdake> i think if it was implemented yes :)
20:19:30 <sdake> from what I can tell it is not presently but it doesn't seem like all that hard
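(The env_file feature being discussed looks roughly like this on the fig/compose side — a sketch with illustrative file and image names; the image name is borrowed from the kollaglue example mentioned later in the meeting:)

```yaml
# fig.yml (sketch) - a db service reading its settings from an
# environment file instead of inline environment: keys
db:
  image: kollaglue/db-container
  env_file:
    - openstack.env
```

As sdake notes, transporting `openstack.env` to the host would need a second `get_file` alongside the one for fig.yml itself.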
20:19:40 <daneyon_> shardy: after Heat orchestrates the fig deployment. Do operators simply use the fig/Docker commands for ongoing management of the services?
20:20:25 <shardy> daneyon_: they could, but generally the idea is you manage the whole lifecycle of things created by heat with heat
20:20:53 <shardy> e.g particularly update/delete of things, or heat won't know about any out-of-band management that happens directly
20:21:23 <shardy> the nice thing is heat can do stuff like transparently scale out containers, do batched rolling upgrades, things like that
20:21:37 <shardy> which is why I'm pretty exited by the integration possibilities here :)
20:22:01 <daneyon_> shardy: Is the ongoing mgt through heat currently supported or will that follow on?
20:22:04 <shardy> sdake, Slower: I'll take an action to investigate the env thing and report back (next week if you like)
20:22:19 <shardy> ramishra is the expert on this, and unfortunately he's probably asleep ;)
20:22:25 <sdake> sounds good shardy
20:22:33 <Slower> shardy: sounds good
20:22:37 <shardy> #action shardy to investigate env support in heat hook
20:22:44 <daneyon_> shardy: and fig can scale containers, so would that feature be disabled?
20:23:01 <shardy> daneyon_: how does fig scale containers over multiple hosts?
20:23:03 <Slower> I don't think we need it in this use case
20:23:07 <shardy> e.g what orchestrates that?
20:23:17 <Slower> fig only manages the one host and we only need one instance of whatever services
20:23:20 <sdake> shardy fig doesn't do multi-host
20:23:27 <stevebaker> (hi)
20:23:41 <sdake> hey stevebaker
20:23:45 <Slower> stevebaker!!
20:23:49 <sdake> unfortunately i think you have missed the conversation :)
20:23:50 <inc0> good evening Steve
20:23:58 <sdake> anyone else have Qs to ask?
20:24:00 <sdake> or shall we move on?
20:24:27 <daneyon_> shardy: It doesn't scale over multiple hosts today, but I'm sure that's on Docker Inc's agenda with Swarm. Just scaling on the same host doesn't make a whole lot of sense. It can just be confusing if fig supports scaling and so does Heat
20:24:28 <sdake> I think in conclusion, everyone has done a bangup job in the last m2->m3 cycle of making fig consumable by third party projects
20:24:36 <shardy> daneyon_: re the ongoing mgt, we've got the basic crud support going in now, I'm sure there will be further enhancements to come later
20:24:37 <sdake> party like its 1999
20:25:04 <Slower> meeting topic: the awesome
20:25:10 <stevebaker> did the compose hook land?
20:25:12 <shardy> daneyon_: Ok, that's good feedback, sounds like some overlap to be aware of and potentially resolve then
20:25:17 <shardy> stevebaker: yup
20:25:22 <stevebaker> cool
20:25:26 <sdake> i think docker scale-out can be ignored
20:25:28 <shardy> https://review.openstack.org/#/c/160642
20:25:29 <sdake> if heat handles it
20:25:50 <sdake> i'd rather not involve swarm in the mess we already have ;)
20:26:09 <shardy> daneyon_: I don't think anyone is saying heat should be the only way to scale out, merely that it's quite a neat integration point and it fits well with the existing TripleO abstractions
20:26:10 <sdake> pretty sure my brain is at 100 ATM atm :)
20:26:14 <daneyon_> sdake: agreed, just wanted to make shardy aware of it.
20:26:39 <shardy> Ok, well thanks guys, I won't take up any more of your meeting ;)
20:26:46 <sdake> thanks shardy for coming
20:26:51 <shardy> feel free to come chat to us in #heat if there are other questions
20:26:55 <sdake> it's good to know the tripleo community is actually interested in our work ;)
20:27:00 <rhallisey> shardy, thanks
20:27:15 <shardy> sdake: yup, very interested! :)
20:27:34 <sdake> #topic improving docker-compose integration
20:27:42 <sdake> ok so we have about 6 fig files
20:27:45 <sdake> we have about 12 containers
20:27:53 <sdake> we either need to delete containers or add fig files
20:27:56 <sdake> whats it gonna be ;)
20:28:34 <rhallisey> add fig files
20:28:38 <sdake> +1
20:28:52 <Slower> haha
20:28:55 <Slower> ya +1 :)
20:29:03 <daneyon_> add figs
20:29:09 <rhallisey> the reason being that nova-conductor creates the db and nova-api creates the keystone user
20:29:10 <sdake> sounds like it's unanimous
20:29:14 <Slower> although I think we should focus on getting the ones we have to work solidly
20:29:19 <sdake> agree
20:29:24 <rhallisey> so I can see other race conditions happening
20:29:30 <sdake> so we need to add to the agenda the topic of "when" we do this work
20:29:40 <sdake> but we all agree it needs to be done
20:29:46 <sdake> #topic adding HA features to kolla
20:30:12 <sdake> the masses want rabbitmq in ha mode, galera in ha mode, and load balancing in ha mode
20:30:25 <sdake> these are 3 containers we will need to sort out in the coming months
20:30:25 <sdake> or ignore ha alltogether
20:30:37 <sdake> I'd say HA is our #2 priority after fig completion
20:30:44 <sdake> what do other folks say?
20:31:02 <rhallisey> give the people what they want :p
20:31:19 <sdake> you will get a black car and like it! :)
20:31:31 <rhallisey> :)
20:31:43 <sdake> again another topic for "when"
20:31:52 <sdake> since we can't do it immediately in this m3 cycle
20:32:03 <sdake> jpeeler about?
20:32:26 <sdake> since jpeeler is out, I'll skip the CI for this week
20:32:27 <jpeeler> oh hi
20:32:30 <daneyon_> HA is important
20:32:32 <sdake> oh hey :)
20:32:32 <Slower> maybe i'm crazy but I think minimal functioning container set is #1
20:32:34 <rhallisey> sdake, so but increasing the # of fig files does it mean we are abandoning the idea of container sets
20:32:45 <sdake> rhallisey whatever works
20:33:05 <sdake> I like container sets where they fit - like nova compute or conductor
20:33:11 <rhallisey> sdake, maybe your blueprint will need a change then
20:33:13 <Slower> minimal functioning containerized openstack install working I mean
20:33:18 <sdake> ya the spec needs changing
20:33:28 <sdake> slower ack that, we need that done so that is priority #0
20:33:36 <sdake> although I am pretty sure we are close there
20:33:36 <Slower> ok I like that :)
20:33:38 <rhallisey> sdake, that's true, some of them can be made into set
20:33:40 <rhallisey> sets
20:33:54 <sdake> a set can be 1 or more things
20:33:54 <jpeeler> sdake: i'm here - but i don't have an update as i haven't worked on the testing any. when is this milestone over again?
20:33:58 <sdake> so indeed they can be sets :)
20:34:08 <sdake> 19th of march is the deadline
20:34:25 <sdake> #topic milestone #3 planning
20:34:26 <sdake> so march 19th is our milestone deadline
20:34:34 <sdake> I think we are really close
20:34:45 <sdake> we should be focused on making the fig.ymls we have published work with the containers we have today
20:34:49 <sdake> and vice-versa
20:35:03 <sdake> and assuming all that is correct, I'll cut the m3 branch on the 19th
20:35:13 <sdake> so that is priority #0, lets focus on that
20:35:26 <sdake> make sure fig + containers we have today work + make sure what we have is documented
20:35:31 <rhallisey> Slower and I have it working on our own images so will push and rebuild those images
20:35:48 <sdake> lets make the upstream images work rhallisey
20:35:53 <rhallisey> yes
20:35:55 <sdake> figure out the diffs
20:35:59 <sdake> and get em submitted
20:36:03 <rhallisey> that's what I'm saying
20:36:06 <sdake> cool
20:36:09 <sdake> sounds good then :)
20:36:25 <sdake> I am going to bounce all the other stuff into milestone #4
20:36:28 <sdake> which will land prior to ODS
20:36:44 <sdake> April 30th sound good for a date?
20:37:03 <Slower> sure
20:37:08 <daneyon_> sdake: re; priority 0, I would like to discuss this: https://review.openstack.org/#/c/159004/
20:37:12 <rhallisey> sounds good
20:37:23 <daneyon_> sdake: Let me know if you would like to discuss during this meeting or offline
20:37:39 <Slower> daneyon_: yes
20:37:45 <sdake> lets discuss now daneyon_
20:37:52 <sdake> any blockers queue up
20:38:01 <Slower> s/fedora/centos/ ?
20:38:06 <daneyon_> OK. Pls take a look at my last response and help me better understand your concerns
20:38:23 <Slower> I think we should do all centos and then we can do a patch to allow others
20:38:40 <Slower> which actually is another topic, how to do that well
20:38:41 <sdake> quoting you daneyon:
20:38:44 <sdake> If we do not use a data container, then we simply host mount the required DIRs within the app container as we have always done. In that case, if you rm the fig composed service, then the data is gone in that scenario.
20:38:55 <sdake> the host would still have the database on it right?
20:39:02 <daneyon_> I see the db-app/db-data container set as a perfect example for using the 2 under a single fig yml... treating them as a single service
20:39:04 <sdake> so when the container starts up again, even after a docker rm, it would be read
20:39:11 <Slower> that's what we're doing and it persists
20:39:42 <sdake> slower you mean that container in that review persists?
20:39:46 <sdake> when I ran it, it did not persist
20:40:01 <daneyon_> sdake: Using the standard data mgt method, the data would be gone if you did a fig rm.
20:40:03 <sdake> all I care about is the data persists
20:40:04 <Slower> sdake: I haven't tried that one
20:40:20 <sdake> but if you bind mount - it is stored on the host
20:40:25 <sdake> this is the only way I know to get data to persist
20:40:36 <daneyon_> what are best practices for managing individual containers within a fig managed environment?
20:40:53 <Slower> we are running with -v /var/lib/mysql and it persists
20:40:55 <sdake> daneyon_ I think docker-compose is so green we are making up best practices
20:40:59 <Slower> on a plain single container
20:41:22 <daneyon_> a service = multiple containers. you want to upgrade portions of the service. How should this be done when using fig?
20:41:25 <sdake> ya taht is a host bind mount
20:41:55 <sdake> we haven't really sorted out upgrade other then compose pull compose stop compose rm compose up
20:42:10 <sdake> I have tested this works with nova-compute but not the db containers
20:42:10 <Slower> yeah if you just add the bind mount to https://review.openstack.org/#/c/163953/2 it'd persist
20:42:33 <sdake> daneyon_ any objection to adding the bind mount then?
20:42:53 <Slower> daneyon_: I must confess I do not understand the idea of splitting them up like that?
20:42:56 <Slower> what do you gain?
20:43:12 <sdake> one is responsible for execution one is responsible for storage of data
20:43:15 <sdake> it makes sense to me
20:43:25 <sdake> the part that doens't make sense is the nonpersistence :)
20:44:02 <sdake> this is one of the things that killed k8s for us - no host bind-mounting
20:44:04 <daneyon_> Slower: So, you run a container with the mysql bind mount to the host. You stop and destroy the container, start a new container with the mysql bind mount and it has the same data?
20:44:07 <sdake> specifically for the database
20:44:17 <sdake> daneyon_ 10-4
20:44:21 <Slower> daneyon_: yes
20:44:27 <Slower> daneyon_: because all the data is on the host fs
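(The persistence pattern Slower and sdake are describing, sketched in compose/fig terms — service and image names are illustrative. Note the distinction being debated: a bare `-v /var/lib/mysql` creates a Docker-managed volume, whereas the `host:container` form below is the host bind mount sdake refers to, which keeps the data on the host filesystem:)

```yaml
mariadb:
  image: kollaglue/db-container
  # host-path:container-path bind mount - the database files live
  # on the host fs, so removing and recreating the container
  # (docker rm / compose rm + up) keeps the same data
  volumes:
    - /var/lib/mysql:/var/lib/mysql
```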
20:45:57 <Slower> tbh I'm not sure why you'd add the complexity if you aren't really getting anything from it
20:46:01 <daneyon_> Slower: If you need to perform maintenance on the host, then the container is down... you lose a key feature of containers... portability.
20:46:03 * Slower likes simple
20:46:39 <Slower> I see so you would just keep that container running forever.. ?
20:46:42 <sdake> interesting thinking daneyon_
20:46:43 <Slower> what about reboots?
20:47:00 <sdake> but I dont think we want to encourage portability in our design methods
20:47:24 <sdake> the work of figuring out on the host which needs to be copied to a new host for maintenance is an unsolved problem
20:47:30 <sdake> but I'm not sure we can solve it in a meaningful way
20:47:49 <daneyon_> Slower: The thought of separating the two was more from an ongoing operations standpoint. If I wanted to upgrade my mysql app and something goes wrong, I could corrupt or lose my data within the container.
20:48:19 <sdake> here comes container checkpointing ;)
20:48:23 <Slower> haha
20:48:57 <Slower> I'm going to say.. the db should be the one being reliable? :)
20:49:04 <sdake> daneyon_ if you can make the container persistent between kills, I'm open to having multiple containers
20:49:47 <Slower> I wonder if a truly robust installation wouldn't have a real dedicated db system
20:50:13 <sdake> a robust installation system would have dbs running in containers in ha fashion
20:50:21 <daneyon_> sdake: I'll go back to a single db container since we have other pressing needs.
20:50:25 <sdake> and a way to back em up on the fly
20:50:36 <sdake> daneyon_ two containers is fine, just bindmount the data container
20:50:57 <sdake> but agree re pressing needs
20:51:11 <daneyon_> sdake: That's what I do and then the app container pulls its vols from the db-data container
20:51:13 <sdake> anything else out there lurking that needs addressing?
20:51:35 <sdake> but when it pulls the vols from the db_data container, have the db_data container bindmount
20:51:42 <sdake> that way its pulling its vols from the host os
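(The two-container arrangement sdake and daneyon_ converge on, as a compose sketch — service and image names are hypothetical:)

```yaml
db-data:
  image: kollaglue/db-data-container
  # the data container owns the host bind mount, so the data
  # still ultimately lives on the host fs
  volumes:
    - /var/lib/mysql:/var/lib/mysql

db-app:
  image: kollaglue/db-container
  # the app container inherits the data container's volumes;
  # killing/upgrading db-app never touches the data directly
  volumes_from:
    - db-data
```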
20:51:44 <Slower> how are we going to address fedora vs centos containers?
20:51:47 <sdake> maybe that does or doesn't work, I don't know
20:51:54 <Slower> It would be nice to not duplicate the yml files
20:51:55 <daneyon_> sdake: maybe all i need to do is specify the source of the bind mount in the db-data container
20:51:56 <sdake> slower i think for now we just say "CentOS is the way to go"
20:52:08 <sdake> daneyon_ ack sounds like that should work
20:52:17 <daneyon_> sdake: could you give that a try?
20:52:28 <sdake> daneyon_ provide a patch I'll test
20:52:39 <daneyon_> sdake: will do
20:52:41 <sdake> can folks clean up the review queue today
20:52:46 <sdake> and hammer all the reviews out that are pending
20:52:55 <sdake> so we can start m0ar testing
20:53:12 <Slower> yeah
20:53:14 <Slower> sounds good
20:53:25 <sdake> #topic open conversation
20:53:27 <sdake> we have 7 minutes :)
20:54:16 <rhallisey> Ian and I started a #kolla channel on freenode
20:54:34 <sdake> should register it with openstack
20:54:36 <rhallisey> if you wanted to discuss there
20:54:37 <rhallisey> ok
20:54:38 <sdake> so it doesn't get ninja hijacked
20:54:53 <rhallisey> sounds good
20:54:55 <sdake> there are instructions in the google search somewhere :)
20:55:02 <rhallisey> I'll find it
20:55:06 <rhallisey> join #kolla !
20:55:24 <jpeeler> apologies on being absent earlier - catching up on the meeting. to summarize, we're thinking long term kolla will integrate with tripleO now?
20:55:35 <sdake> jpeeler seems plausible
20:56:42 <sdake> ok folks well thanks for coming!
20:56:43 <sdake> #endmeeting