13:00:55 #startmeeting openstack-salt
13:00:56 Meeting started Tue Jun 21 13:00:55 2016 UTC and is due to finish in 60 minutes. The chair is newt_. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:00:57 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:00:59 The meeting name has been set to 'openstack_salt'
13:01:20 #topic roll call
13:01:30 hello
13:02:07 hi
13:02:29 hi majk
13:04:26 hello all
13:04:32 is Ales here?
13:04:35 what is the agenda?
13:04:44 newt, can you start?
13:05:06 yes, I'm waiting for the folks
13:05:22 for the Penguin especially
13:05:56 o/
13:06:12 #topic Introduction
13:06:20 This meeting is for the openstack-salt team
13:06:32 if you're interested in contributing to the discussion, please join #openstack-salt
13:06:49 #link http://eavesdrop.openstack.org/#OpenStack_Salt_Team_Meeting
13:06:57 #link https://wiki.openstack.org/wiki/Meetings/openstack-salt
13:07:09 #topic Review past action items
13:07:38 newt_ to test the new heat/vagrant provisioning
13:08:17 we have done this with Tux, I saw you entering, buddy; can you tell us more about the status?
13:08:36 Of the heat/vagrant, are there any docs?
13:09:08 Heat stacks in two variants, single and cluster, with parametrized bootstrap scripts are already tested for several possible variations of operating system, networking etc. I'm working on documentation now
13:09:44 are there any blueprints on planned features?
13:09:50 yes
13:10:23 https://blueprints.launchpad.net/openstack-salt
13:10:27 it's according to the blueprint for orchestration automation
13:11:06 we should update the status and assign specific tasks
13:11:20 newt as PTL should approve the appropriate blueprints
13:11:20 #link https://blueprints.launchpad.net/openstack-salt/+spec/service-orchestration
13:12:19 the idea of this blueprint is to allow orchestration of openstack with various configuration options
13:12:37 when the docs of possible parameters are up, we're ready to go on and test it
13:12:55 great
13:13:22 then there is: marco to provide support for testing the midonet setup on a heat stack
13:13:59 OK, can we move to the topic of getting these formulas official?
13:14:11 is marco around?
13:14:13 do we have a list of formulas? midonet, kubernetes, swift
13:14:34 marco is offline
13:14:49 I see
13:14:52 but he finished salt-formula-midonet
13:15:28 I've heard; I was wondering how far he got with the testing suite.
13:16:10 How do we set up a process to accept a new formula into the openstack official repos?
13:16:18 marco, can you provide an update for midonet?
13:16:24 Shall we vote?
13:16:41 we can vote
13:17:19 the midonet formula is done for kilo, salt-formula-neutron is under review
13:17:33 who is for adding midonet-formula to mainstream?
13:17:50 or has some objections?
13:18:25 +1
13:18:33 +1
13:18:50 +1
13:18:59 +1
13:19:20 +1
13:19:36 #action add midonet to openstack-salt formulas
13:19:41 #action newt add midonet to openstack-salt formulas
13:20:13 Now the swift formula: it has been tested and has been running on kilo and liberty
13:20:24 regarding the task "epcim to find out if our networking approach is rh7 compatible and suitable" (consistent interface naming on ubuntu >= 15.10 and rhel >= 7): are we about to open this topic..
13:20:39 I wrote that it is ready only for kilo
13:20:51 and we need to test with rh
13:20:56 We should add midonet formula support to our OS Salt lab stacks after all necessary reviews are closed and merged
13:21:17 we can get the repository under openstack official and start the review procedure to be able to add support for redhat
13:21:57 Tux_: agreed, but the formula will be under official processes while being worked on
13:22:24 we will fix all the remaining issues using reviews and openstack CI
13:22:36 epcim: hold on. We have to approve all formulas going official.
13:22:39 the question is whether to add swift
13:22:47 who votes for?
13:22:56 +1
13:23:02 +1
13:23:06 +1
13:23:11 +1
13:23:36 ok
13:24:04 agreed. And what about kubernetes? this one is the trickiest, not being an openstack service
13:24:49 it should be there as well
13:24:55 -1
13:25:00 nothing related to openstack
13:25:08 but it is community
13:25:19 isn't kolla-kubernetes related to openstack?
13:25:51 jpavlik: kolla is just about kubernetes, we focus on all services in general
13:26:41 I'm hesitant; it does not belong to the openstack services, but it is used to run them and it would be nice if it was managed by OS CI
13:27:01 kolla is about running openstack in containers, which is part of salt-formula-kubernetes as well
13:27:17 I would like to get it out of the tcpcloud namespace to be a more community-open solution. An alternative to kolla
13:28:03 it is the same as https://github.com/openstack/fuel-plugin-saltstack
13:28:08 nothing related to openstack
13:28:10 anyone else throw a vote?
13:28:16 +1 for the move, but we should not mix the openstack community with non-openstack. Otherwise we can also move formulas like postfix, freeipa, jenkins, etc.
13:28:40 I would rather try to push the discussion with SaltStack again about making our formulas more official
13:28:49 what repo?
13:29:16 genunix: I think that the kubernetes formula is not the same as linux or postfix
13:29:30 I think I agree with jpavlik on this; if there are other barely related projects already, this one could really benefit the community
13:29:48 I see a huge benefit in open CI
13:30:00 jpavlik: but you can say freeipa has some relation, because you can use the LDAP backend it provides for keystone auth.
13:30:05 kubernetes should be a new mainstream for running openstack-salt
13:30:17 +1, me too
13:30:19 we can make a SaltStack community CI for all of these non-openstack formulas
13:30:42 genunix: this is the best way, however we do not have the power to get there.
13:31:21 jpavlik: but both solutions, deployment to virtuals and to kubernetes, should coexist. As both will be used for production..
13:31:40 well, we agreed on midonet and swift; kubernetes is a little different, we should take care of the non-openstack formulas in a consistent way
13:31:45 containerised services for OS might attract new users.. for sure..
13:32:19 I also think Kubernetes should be an alternative; I wouldn't throw away legacy solutions just yet
13:33:23 I will talk to the infra guys about the number of repos we can have
13:33:29 I am not talking about throwing anything away. The discussion is about getting kubernetes under the openstack namespace, because it is related and it can be part of a future simple installer for the community
13:34:11 If the limits are fine, I'd consider moving all things related to running openstack with salt there.
13:34:53 this leads us to: Tux to add SPM support to formulas and register to the inventory if there's any
13:35:06 epcim: we'll get to your issue shortly
13:35:07 including linux and the other formulas?
13:35:34 they will not approve, because it is a duplication of official saltstack
13:35:35 if you look at the ansible repositories under openstack-salt
13:35:46 openstack-ansible, I meant
13:36:07 you see many non-openstack-related repos
13:36:16 jpavlik: no, there may be as many SPM sites as individual solutions
13:36:22 newt_: I'm currently facing some difficulties with metadata; default metadata in the formula root cannot be cleanly included in an SPM package. I'll look into this more; otherwise, SPM packaging metadata is already prepared for all formulas
13:36:29 tcpcloud may provide an SPM repo for the openstack-salt formulas
13:36:56 the metadata issue: I'll update the blueprint to handle this
13:37:08 +1 to spm repo
13:37:25 +1 to spm repo
13:37:29 +1 spm
13:37:33 I was thinking the same thing, +1 for spm repo
13:38:00 Tux_: can you write it up as a salt issue?
13:39:05 tcp can host the spm repository and set up jobs to deliver new versions there
13:39:46 so the salt community may start using the openstack-salt formulas right away
13:40:08 fine!
13:40:17 the metadata issue means the reclass classes are not available, but it is solvable
13:40:32 now: epcim to find out if our networking approach is rh7 compatible and suitable
13:40:38 continuing "networking approach is rh7 compatible": basically you should read http://askubuntu.com/a/689143 (udev/systemd assigns the names based on multiple attributes: mac, firmware, etc.). Thus on virtualization it will get "inconsistent" as a side effect for automation and cfg. mgmt tools. The solutions are: A, remove the persistent rules from the configuration on each platform (i.e.: ln -s /dev/null /etc/udev/rules.d/80-net-name-slot.rules; pass `net.ifnames=0` to the kernel); B, write custom naming rules (internet0, public0) and still do (A); C, avoid using interface names (as they are not important) and introduce new attributes (private_ip, public_ip, ...) that will be shared in grains etc..
13:40:38 (example, as OHAI does for cloud ISVs: https://github.com/chef/ohai/blob/master/lib/ohai/plugins/cloud.rb)
13:40:50 pls have a quick look at the links
13:40:59 newt_: Yeah, I would like your support with this issue, I didn't come up with any good solution yet
13:41:22 newt_: At the moment I'm unable to deliver the service class with the SPM package
13:42:48 which is expected
13:42:50 epcim: we can set the kernel params by states; the default interface names can be map.jinja based?
13:42:58 I am for (C) as it's pluggable per environment..
13:43:57 I see the issue with (B): it may not fit other customers
13:44:06 but I think we'll need to go into more detail; it is too much info for me to make a decision :)
13:44:17 I do not understand what the problem is. You tested ubuntu 16.04 and hit an issue with networking?
13:44:39 also, on some platforms (not sure, but ec2, docker) you may not be given the option to modify interface names
13:45:27 the issue is that since 16.04 and rhel 7 the interfaces have names like ens6s0
13:45:40 can we discuss this later on the irc channel? because this is a very deep dive
13:45:52 and bring a resolution at the next irc meeting?
13:46:00 this is about the past meeting issues
13:46:00 re the first link: based on the naming scheme used in the rules, the name may contain part of the mac/firmware etc..
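[Editor's note: the "option A" workaround epcim describes above (mask the udev persistent-naming rules and boot with `net.ifnames=0`) could be sketched as a Salt state roughly like the following. This is an illustration only; the state IDs, the Debian-style /etc/default/grub handling, and the update-grub step are assumptions, not part of any existing formula.]

```yaml
# Sketch of "option A": disable predictable interface naming so classic
# eth0-style names come back. All state IDs here are illustrative.
disable_predictable_ifnames:
  file.symlink:
    - name: /etc/udev/rules.d/80-net-name-slot.rules
    - target: /dev/null

net_ifnames_kernel_param:
  file.replace:
    - name: /etc/default/grub
    - pattern: '^GRUB_CMDLINE_LINUX="'
    - repl: 'GRUB_CMDLINE_LINUX="net.ifnames=0 '
    - unless: grep -q 'net.ifnames=0' /etc/default/grub

update_grub:
  cmd.run:
    - name: update-grub
    - onchanges:
      - file: net_ifnames_kernel_param
```

A reboot is still needed before the renamed interfaces appear, which is one reason options (B) and (C) were also on the table.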
13:46:01 I need to understand it in more detail
13:46:19 We are not targeting interfaces by specific names
13:46:24 it is different per deployment
13:46:31 The following different naming schemes for network interfaces are now supported by udev natively:
13:46:32 1) Names incorporating Firmware/BIOS-provided index numbers for on-board devices (example: eno1)
13:46:32 2) Names incorporating Firmware/BIOS-provided PCI Express hotplug slot index numbers (example: ens1)
13:46:32 3) Names incorporating the physical/geographical location of the connector of the hardware (example: enp2s0)
13:46:32 4) Names incorporating the interface's MAC address (example: enx78e7d1ea46da)
13:46:32 5) Classic, unpredictable kernel-native ethX naming (example: eth0) - deprecated
13:46:47 but this is not a problem
13:46:55 reclass salt models do...
13:47:07 reclass must be fitted to every physical server
13:47:20 you cannot predict anything; even if you use the mac address, someone has to set it manually
13:47:33 but that is changing with containers, for example
13:47:47 we do not manage interfaces inside a container
13:47:52 you do not care about the ip in a container
13:47:55 yes
13:48:04 there is no interface management
13:48:08 no network management if the model is not set
13:48:20 in formulas you want to bind services to particular subnets (so you acquire them by interface name "today")
13:48:26 + for these purposes (determining the interface, mac addr, whatever), there are grains
13:48:40 we bind on the ip address, not the interface
13:48:45 0.0.0.0 or a vip address
13:48:49 (example, as OHAI does for cloud ISVs: https://github.com/chef/ohai/blob/master/lib/ohai/plugins/cloud.rb)
13:48:51 or a single local address
13:49:45 I suggest discussing this individually
13:49:56 epcim: can you provide us with a specific example of a state that uses interface names directly? I don't know whether we use this at the moment or not
13:49:59 grains should return the 'private_ipv4' of the node (whatever interface).
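[Editor's note: the grain-based "option C" floated here could look roughly like the custom grain below, which reports the address used for the default route instead of relying on interface names. This is a sketch only; the grain name `private_ipv4` and the parsing details are the editor's assumptions, not an existing module.]

```python
# Sketch of a custom Salt grain (dropped into _grains/ on the master's
# file roots) that returns the IPv4 address the node uses for its
# default route, similar to what Ohai's cloud plugin does.
import socket


def _default_iface():
    """Return the interface carrying the IPv4 default route, or None."""
    try:
        with open("/proc/net/route") as fh:
            for line in fh.readlines()[1:]:
                fields = line.split()
                # Destination 00000000 with the RTF_GATEWAY flag (0x2)
                # set marks the default route.
                if fields[1] == "00000000" and int(fields[3], 16) & 2:
                    return fields[0]
    except OSError:
        pass
    return None


def private_ipv4():
    """Expose the default-route source address as a grain dict."""
    if _default_iface() is None:
        return {}
    # Ask the kernel which source address it would pick for an outbound
    # packet; a UDP connect() sends no actual traffic.
    s = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
    try:
        s.connect(("192.0.2.1", 53))  # TEST-NET-1 documentation range
        return {"private_ipv4": s.getsockname()[0]}
    except OSError:
        return {}
    finally:
        s.close()
```

Formulas could then consume `grains:private_ipv4` from the model instead of hard-coding interface names, which keeps the approach pluggable per environment as requested.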
Ohai, for example, checks which interface the default route is on before returning these entries; it's more complex than just mapping interfaces.
13:50:22 we can introduce a grain that will determine and provide that information
13:50:29 if needed
13:50:37 but there is no need to do that
13:50:41 I need to know the use case
13:50:53 because until now I did not care about the naming of interfaces
13:50:54 genunix: can we make that somehow pluggable (so others using the formulas may modify it)?
13:50:58 only in the case of vrouter
13:51:20 ok, let's move to today's workload; this discussion is getting some friction :)
13:51:57 Tux_: grep them (it's a general issue for future compatibility).
13:52:10 #topic Today's Agenda
13:52:30 today's agenda is about the past agenda, but I'll try to summarise:
13:52:46 get the midonet and swift formulas into the infra repo
13:53:33 get the test suite documented and used for midonet and dvr in the 1st wave
13:54:01 get SPM into the formulas and the repo ready for all formulas
13:54:21 that's it for the recapitulation of our tasks
13:54:22 now
13:54:23 #topic Open Discussion
13:55:10 OK, let's discuss this on irc
13:55:26 thanks everybody
13:55:55 does anyone else have something on their mind?
13:57:41 well, that's all, gentlemen
13:57:55 have a nice rest of the day
13:57:58 #endmeeting