16:00:26 #startmeeting OpenStack-Ansible
16:00:26 Meeting started Thu Sep 15 16:00:26 2016 UTC and is due to finish in 60 minutes. The chair is mhayden. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:27 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:29 o/
16:00:30 The meeting name has been set to 'openstack_ansible'
16:00:32 #topic Roll Call
16:00:37 \o
16:00:39 o/
16:00:41 o/
16:00:43 congrats to prometheanfire for getting in first! :P
16:00:44 \o
16:00:59 haha
16:01:11 hi
16:01:25 I didn't say 'FIRST' so it doesn't count
16:01:30 o/
16:02:11 odyssey4me and his UK friends are out on the town causing trouble, so we may have a few less folks in here than normal ;)
16:02:13 I believe many of our UK cohort will be away today.
16:02:22 ++
16:02:34 o/
16:02:39 this means that all bugs we talk about in today's meeting get assigned to them, right?
16:02:47 o/
16:02:48 definitely
16:03:39 okay, we have a good group here, so let's get underway
16:03:50 #topic Action items from last week
16:03:56 #link http://eavesdrop.openstack.org/meetings/openstack_ansible/2016/openstack_ansible.2016-09-08-16.00.html
16:04:05 the reviews i brought up last week got merged
16:04:21 as did stevelle's Gnocchi identity patch
16:04:28 so i think we're good on those action items
16:04:36 yup
16:04:46 #topic Newton RC1 release
16:05:00 yupie
16:05:00 i know we're missing odyssey4me today -- is there anything to discuss on this topic?
16:05:39 i believe we need https://review.openstack.org/#/q/project:%255Eopenstack/openstack-ansible.*+starredby:jesse-pretorius+status:open
16:06:02 ah good call
16:06:05 and the gate is being mighty nasty
16:06:08 #link https://review.openstack.org/#/q/project:%255Eopenstack/openstack-ansible.*+starredby:jesse-pretorius+status:open
16:06:25 which is likely due to issues in the projects as they too scramble to get RC1 out
16:06:37 so recheck recheck recheck
16:06:46 if folks can help babysit it'd be great
16:07:13 I have a few starred items too
16:07:15 https://review.openstack.org/#/q/starredby:cloudnull+status:open,n,z
16:07:17 #link https://review.openstack.org/#/q/starredby:cloudnull+status:open,n,z
16:07:25 which would be nice to haves
16:07:33 ah, i'll gander at those after this mtg, cloudnull
16:07:46 anyone have anything else urgent on the topic of RC1
16:07:56 also I think https://review.openstack.org/#/c/365098/
16:08:12 needs to be updated for doc review from robb
16:08:33 that's all i got
16:08:43 good stucc
16:08:49 s/stucc/stuff/
16:08:57 we'll keep rollin'
16:09:08 #topic Octavia + OpenStack-Ansible
16:09:19 welcome to jorgem and johnsom from the land of octavia! :)
16:09:22 hey everyone
16:09:26 Hi there
16:09:28 nice to virtually meet you all
16:09:31 ;)
16:09:35 can y'all introduce yourselves and share a little bit of what you're interested in doing?
16:09:42 sure thing.
16:09:44 ohai
16:10:01 I am the current Octavia PTL
16:10:31 I'm Product Manager (former dev turned to the dark side) at Rackspace for Cloud Load Balancing and Cloud Networks. Have been mostly involved with Neutron LBaaS and Octavia
16:10:47 This is Nish, I am in the OSIC team
16:10:58 Interested in getting LBaaS integrated into OSA
16:11:18 coolo
16:11:18 *cool
16:11:21 cause it's ready for primetime :)
16:12:03 Yes, newton will be the third Octavia release where it is the reference driver for neutron-lbaas.
16:12:07 Just wanted to hear from everyone what the best approach is. I heard we may need an LBaaS role of some sort?
16:12:15 we currently have LBaaSv2 running with an agent right now in Liberty, Mitaka, and Newton
16:12:44 it seems like we could get a new role together for octavia and run it in its own container
16:12:48 Right, that was my understanding as well. I think what is needed is to add support for the Octavia driver
16:13:13 and we'd need a toggle for deployers to choose between LBaaSv2+agent and LBaaSv2+octavia
16:13:57 awesome. Anything else we would need?
16:14:13 we will also need an image built and tagged properly in glance, correct? since octavia uses that image when it builds LB vm's?
16:14:38 that is my understanding but will defer to johnsom
16:15:01 also there's a management network in play that we will need to configure -- we have patterns for this in the tempest role now if i remember correctly
16:15:09 That is correct, we use diskimage-builder to create the image for the service vms.
16:15:24 The management network is probably the other major work item
16:15:26 jorgem / johnsom: the best way to start might be to make a spec and lay all this out for a review
16:15:36 https://github.com/openstack/openstack-ansible-specs
16:15:49 cool deal mhayden
16:16:07 i have one from liberty that you can feel free to copy as much as you want -> http://specs.openstack.org/openstack/openstack-ansible-specs/specs/mitaka/lbaasv2.html
16:16:17 can octavia do multi-master(controller) yes?
16:16:19 *yet
16:16:20 i never made it all the way to octavia in that spec, just to lbaasv2+agent
16:16:57 Yes, all of the control plane processes are multi-instance friendly (and we strongly encourage it)
16:18:02 johnsom: but the active-active LB support is coming in ocata, right?
16:18:08 not sure if that's what cloudnull was asking
16:18:18 can you run more than one controller
16:18:24 mhayden: Were there talks about the required provider network for octavia and security issues we had with that?
16:18:35 BjoernT: yeah, we will need a plan for that
16:18:49 For the data side, the load balancers themselves, we currently support active/standby in mitaka and newton. Active/Active should land early in Ocata
16:19:03 the management/provider network will be something new, as is the image build itself -- so those will require some head scratching to get right
16:19:20 the rest is pretty standard openstack stuff
16:19:32 cloudnull Yes, we encourage all of the controller processes to run multi-instance
16:19:35 we already have the neutron half of the project deployed
16:19:46 johnsom: multi-instance?
16:19:50 as in a VM?
16:20:01 or physical host
16:20:09 cloudnull: the LB (haproxy) itself runs inside VM's
16:20:14 cloudnull as in multiple python processes running on multiple nodes
16:20:49 I.e. the controller worker process should be running on three or more nodes
16:20:58 jorgem / johnsom: could y'all propose a spec with your proposed work items and we can get some reviews going in gerrit?
16:21:05 * mhayden said y'all :|
16:21:27 sure thing mhayden
16:21:38 so octavia can be installed on our typical three neutron_agent containers and scheduling can happen at all three at the same time?
16:21:42 mhayden: Are there others already working on LBaaS items in OSA?
16:21:43 which is what I want to know
16:21:56 which is what I mean by "can you run more than one controller"
16:22:07 jorgem: at the moment, not that i am aware of
16:22:15 k
16:22:27 johnsom: i think cloudnull is asking if we can run multiple octavia workers at the same time
16:22:31 am i right?
16:22:33 mhayden: Anyone who is interested that you know of?
16:22:34 yes
16:22:45 jorgem: i am!
16:22:49 sweet
16:23:05 automagically / jmccrory: any interest in your world for more robust LBaaS in OSA?
16:23:12 Yes!
16:23:18 definitely
16:23:20 cloudnull Yes, though you may not want to tie it to the neutron_agent container. Other vendor Ansible deployments of Octavia put the Octavia API, Controller worker, Health manager, and Housekeeping in their own nodes/containers.
16:23:21 woot
16:23:26 just wondering who to pester once spec is up ;)
16:23:29 That said, we may well end up using the A10 and F5 drivers
16:23:31 jorgem: just assign all the work items to automagically -- he's great
16:23:49 mhayden: haha noted good sir
16:23:55 mhayden Yes, they all run at the same time for load and HA reasons
16:24:09 cloudnull: ^^
16:24:19 johnsom: that's fine, i just want to make sure I can set up multiple octavia controllers across the cluster without having to do some hostname munging and such.
16:24:50 #action jorgem/johnsom to work on a new octavia spec that is much better than mhayden's
16:24:50 No hostname munging, ha!
16:25:03 anything else on octavia for now?
16:25:23 sounds pretty straightforward unless there are more questions for johnsom and myself
16:25:39 woot
16:25:41 Please feel free to reach out to me with questions. I can also talk directly with folks/teams if you want more information about Octavia and its current state
16:25:53 #openstack-lbaas is where we hang out
16:25:53 thanks for joining us, jorgem and johnsom! :)
16:26:01 thanks everyone!
16:26:13 o/
16:26:19 #topic Release planning and decisions
16:26:30 so 12.2.3 and 13.3.3 came out this week
16:26:39 was there anything urgent to get into the next release?
16:26:51 i know cloudnull and odyssey4me were working with the shenanigans in the py_pkgs lookup
16:27:14 i think those are done
16:27:17 woot
16:27:32 anything else on this?
16:28:02 #topic Open floor
16:28:29 thanks to everyone who helped get the security documentation patches rolling
16:28:43 now everything is automatically generated -- much less manual labor on updating docs for controls ;)
16:29:05 and i set a (horrible) pattern for adding sphinx extensions to generate automatic docs from scripts
16:29:37 anyone have any reviews that they need help with other than the ones that cloudnull brought up earlier?
16:29:57 automagically: need assistance on the ansible 2.1.1 reviews still?
16:30:04 Do I ever
16:30:24 Looks like some of the more recent failures are due to some fact gathering not occurring for containers
16:30:42 Here’s what’s open: https://review.openstack.org/#/q/topic:bp/ansible-2-1-support+status:open
16:31:05 Anyone want to jump on os_swift, os_ironic, os_watcher, etc
16:31:07 Be my guest
16:31:22 gnocchi, neutron and tempest as well
16:31:42 i'll grab watcher for ya
16:32:02 anything else to discuss today?
16:32:15 Is barbican already available in OSA?
16:32:44 johnsom: The role has been around for a while, not sure if anyone’s running it in production
16:33:08 I know there was some work around it when people were trying to get magnum in
16:33:13 But I don't know the state of either currently
16:33:42 Ok, thanks
16:34:34 okay, i'll close it up if there's nothing else
16:35:18 okay, thanks everyone!
16:35:21 #endmeeting
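
For reference on the deployer toggle discussed at 16:13:13: the switch between LBaaSv2+agent and LBaaSv2+Octavia ultimately comes down to which neutron-lbaas service provider is configured. A minimal sketch of what a user_variables.yml override could look like follows; `neutron_plugin_base` is a real os_neutron role variable, but `neutron_lbaasv2_service_provider` is an illustrative name (the actual toggle would be defined by the spec), and the two driver entry points are the standard neutron-lbaas ones for this timeframe.

```yaml
# Sketch only -- neutron_lbaasv2_service_provider is an illustrative variable
# name, not an existing OSA option; the driver entry points are the standard
# neutron-lbaas haproxy and Octavia drivers.
neutron_plugin_base:
  - router
  - neutron_lbaas.services.loadbalancer.plugin.LoadBalancerPluginv2

# Agent-based haproxy driver (what OSA deploys today):
neutron_lbaasv2_service_provider: >-
  LOADBALANCERV2:Haproxy:neutron_lbaas.drivers.haproxy.plugin_driver.HaproxyOnHostPluginDriver:default

# Octavia driver (what the new role would switch to):
# neutron_lbaasv2_service_provider: >-
#   LOADBALANCERV2:Octavia:neutron_lbaas.drivers.octavia.driver.OctaviaDriver:default
```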
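On the amphora image, management network, and multi-controller questions (16:14:13 onward): the controller-side settings for those live in octavia.conf, and an eventual os_octavia role could expose them through the usual config_template overrides pattern. A rough sketch follows, assuming a hypothetical `octavia_octavia_conf_overrides` variable and placeholder values; only the option names under [controller_worker] come from the Octavia configuration reference. The separate processes johnsom suggests running on three or more nodes/containers are octavia-api, octavia-worker, octavia-health-manager, and octavia-housekeeping.

```yaml
# Sketch only -- there is no os_octavia role yet, so the override variable and
# the placeholder values are illustrative; the octavia.conf option names are
# taken from the Octavia configuration reference.
octavia_octavia_conf_overrides:
  controller_worker:
    amp_image_tag: amphora                        # glance tag the worker uses to find the amphora image
    amp_boot_network_list: "<lb-mgmt-net uuid>"   # the management network discussed above
    amp_flavor_id: "<amphora flavor id>"
    amp_secgroup_list: "<lb-mgmt security group id>"
```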
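On the Ansible 2.1.1 gate failures where fact gathering is not occurring for containers (16:30:24): one possible workaround is to force a fact-gathering pass against the container hosts before the plays that consume those facts. The sketch below only illustrates the symptom and is not necessarily the fix the open reviews settled on; `all_containers` is the standard OSA inventory group.

```yaml
# Sketch: force fact collection for containers up front so later plays/roles
# do not hit undefined hostvars; gather_subset is available from Ansible 2.1.
- name: Gather facts for all containers
  hosts: all_containers
  gather_facts: true
  tasks:
    - name: Re-run setup with an explicit subset
      setup:
        gather_subset: "network,hardware"
```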