16:01:21 <sdake> #startmeeting kolla
16:01:22 <openstack> Meeting started Wed Aug 10 16:01:21 2016 UTC and is due to finish in 60 minutes. The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:23 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:25 <sdake> #topic rollcall
16:01:26 <openstack> The meeting name has been set to 'kolla'
16:01:27 <duonghq> o/
16:01:30 <inc0> o/
16:01:34 <coolsvap> o/
16:01:35 <berendt> o/
16:01:50 <janki> o/
16:01:53 <pbourke> o/
16:01:54 <mandre> hi!
16:01:56 <Jeffrey4l> \0/
16:02:03 <janki> Hi, this is my first meeting here
16:02:13 <sdake> janki welcome aboard!
16:02:13 <inc0> welcome janki :)
16:02:24 <duonghq> welcome janki
16:02:33 <srwilkers> o/
16:02:35 <berendt> janki welcome, it's my 2nd one, don't worry
16:02:51 <janki> thank you sdake duonghq inc0 berendt
16:02:59 <coolsvap> janki: o/ berendt ;)
16:03:07 <janki> excited :)
16:03:17 <duonghq> berendt: it is my 3rd iirc
16:03:18 <sdake> well, let's get rolling
16:03:28 <sdake> #topic annuncements
16:03:30 <vhosakot> o/
16:03:32 <sdake> #undo
16:03:33 <openstack> Removing item from minutes: <ircmeeting.items.Topic object at 0x7f5bbf31eb50>
16:03:40 <sdake> #topic announcements
16:04:07 <sdake> osic cluster is on scenario #3 of the 11 or 12 or whatever it is we have :)
16:04:12 <sdake> work progressing nicely
16:04:25 <sdake> gathering tempest and rally data
16:04:27 <rhallisey> hi
16:04:44 <sdake> if you want to help please contact inc0
16:04:52 <sdake> he will get you ramped up
16:05:04 <sdake> any other announcements from community folks?
16:05:07 <inc0> first 2 deployments gave us incredible results
16:05:24 <inc0> 130-node deployments took a little over 20min
16:05:38 <duonghq> just a ref link for newcomers: https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:05:40 <inc0> so please, one woot for Kolla ;)
16:05:47 <rhallisey> woot
16:05:52 <rhallisey> there you are
16:05:52 <sdake> woot woot
16:05:55 <sdake> how about two ;)
16:06:15 <mandre> this is awesome
16:06:15 <inc0> that's it from me, can we have an osic cluster item tho?
16:06:15 <inc0> agenda item
16:06:17 <coolsvap> woot (y)
16:07:21 <duonghq> hi sean-k-mooney
16:07:25 * coolsvap is wounded today, pls don't expect immediate replies
16:07:32 <lrensing_> o/
16:07:34 <sean-k-mooney> o/
16:07:46 <sdake> inc0 it's already in the agenda here: https://wiki.openstack.org/wiki/Meetings/Kolla
16:07:55 <inc0> yay
16:07:59 <sdake> any other announcements?
16:08:22 <sdake> wow, lots of kolla cats today
16:08:40 <sdake> #topic OSIC scale testing
16:08:55 <sdake> i asked pbourke for an update before he departed for a bender
16:09:04 <sdake> the results are that we are ready for the #3 test case
16:09:14 <sdake> however, I noticed that I am unable to ssh into the VMs
16:09:19 <sdake> maybe I am just doing it wrong
16:09:40 <sdake> inc0 can you help get that straightened out, I want to run bonnie++ on scenario 2
16:09:42 <sdake> (today)
16:09:48 <inc0> sure
16:10:00 <inc0> my question tho to pbourke and coolsvap
16:10:02 <sdake> see what kind of perf ceph gives at 10gig
16:10:04 <pbourke> i'm wondering, should we spend a few minutes making sure we're happy with the test scenarios
16:10:09 <inc0> did you guys write down the testing methodology?
16:10:11 <vhosakot> sdake: inc0: I need help to pick items for OSIC cluster in https://etherpad.openstack.org/p/kolla-N-midcycle-osic.. will ping in kolla channel after meeting
16:10:16 <inc0> how did you set up tempest and stuff
16:10:20 <inc0> sure vho
16:10:25 <inc0> vhosakot,
16:10:29 <vhosakot> I got the VPN, ssh, tmux part working
16:10:30 <rhallisey> ya I wanted to try as well
16:10:34 <sdake> vhosakot i think inc0 will help you with that
16:10:34 <rhallisey> just swamped
16:10:42 <vhosakot> sdake: cool...
16:10:53 <sdake> pbourke sounds reasonable - let's not second guess ourselves though
16:10:56 <vhosakot> I will start doing more reviews as well
16:11:04 <sdake> pbourke happen to have a link - again, bookmarks are still busted
16:11:12 <pbourke> https://etherpad.openstack.org/p/kolla-N-midcycle-osic
16:11:14 <inc0> we need to figure out centos with cobbler tho
16:11:24 <inc0> who feels up to it?
16:11:34 <pbourke> basically I want some more detail under each scenario other than "run rally+tempest"
16:11:47 <pbourke> inc0: why do we need that
16:11:48 <sdake> inc0 did the timeframe change on the osic cluster from 4 to 3 weeks?
16:12:01 <inc0> pbourke, to test out centos-source and binary as well
16:12:12 <sdake> agree centos is a good thing to test
16:12:13 <pbourke> inc0: we could deploy those on top of ubuntu
16:12:17 <inc0> sdake, it did, we still have 3 weeks left
16:12:31 <inc0> pbourke, I had problems with centos on top of ubuntu 14.04
16:12:42 <inc0> ubuntu 16.04 worked... systemd issues
16:12:55 <pbourke> ok we can do that
16:13:10 <inc0> either way we need a systemd-based image there
16:13:21 <sdake> ya let's use centos for that
16:13:21 <Jeffrey4l> inc0, what issue did you hit using centos on top of ubuntu 14.04?
16:13:29 <pbourke> also need more volunteers as a lot who originally volunteered are busy atm
16:13:42 <inc0> Jeffrey4l, I couldn't even build it a couple months ago
16:13:42 <sdake> ya we have gaps in the US
16:13:49 <sdake> vhosakot is part of the solution there
16:14:02 <inc0> but we can try to deploy it, just don't want to spend too much time fixing cross-distro issues
16:14:05 <berendt> pbourke busy this week, I can try to assist next week
16:14:10 <Jeffrey4l> OK.
16:14:31 <janki> pbourke: I can assist. need a background though
16:14:40 <sdake> we have more people capacity next week - britthouser is back from pto, and berendt can help
16:14:46 <inc0> janki, we'll talk later on #openstack-kolla
16:14:58 <janki> inc0: sure
16:15:01 <sdake> this week is almost up
16:15:02 <inc0> I'll be back from my travels too
16:15:10 <berendt> sdake yes, but not as a FTE, only a few hours, business has prio
16:15:12 <sdake> not sure what can be done this week, other than to bring vhosakot up to speed
16:15:16 <rhallisey> inc0, was going to try this weekend
16:15:17 <pbourke> wrt the scenarios, I just want to know we're in agreement on what we expect to learn from each one
16:15:20 <sdake> berendt roger
16:15:24 <inc0> so let's get done as much as we can now and sprint for the next 2 weeks
16:15:34 <pbourke> for example, https://review.openstack.org/#/c/352101/9/doc/verification-results-scenario-2-benchmark-boot-and-delete-instances.json
16:15:36 <vhosakot> sdake: yes, I'll work with inc0 and start with osic scaling
16:15:37 <pbourke> is this useful to people?
16:15:40 <sdake> pbourke at this point we are gathering data
16:15:44 <sdake> tempest output is useful
16:15:48 <sdake> rally output is useful
16:15:52 <sdake> characterization is useful
16:16:05 <inc0> we can process and filter out data later on
16:16:14 <coolsvap> pbourke: it's just the json
16:16:15 <inc0> right now let's gather as much as we can
16:16:17 <berendt> inc0 +1
16:16:18 <sdake> we will convert the json to rst later when we don't have the cluster
16:16:18 <pbourke> i only see nova boot+delete
16:16:22 <pbourke> there must be more rally can do?
16:16:25 <coolsvap> I have more results in html format
16:16:28 <inc0> pbourke, a lot more
16:16:31 <sdake> pbourke i had that same question
16:16:43 <sdake> coolsvap html is not useful
16:16:49 <sdake> we need rst, because rst can convert to any format
16:16:54 <sdake> html on the other hand, not so much
16:16:56 <coolsvap> pbourke: rally needs some love
16:17:03 <sdake> json can convert to rst
16:17:08 <berendt> where can I find the rally tests used at the moment?
16:17:11 <sdake> so please folks, when running rally and tempest, output in json
16:17:18 <Jeffrey4l> i think we should keep the raw tempest/rally test results.
16:17:30 <Jeffrey4l> we can convert them into any format when needed.
16:17:41 <berendt> Jeffrey4l raw format of Rally is JSON
16:17:44 <sdake> Jeffrey4l json will be easier to convert to rst via pandoc
16:17:52 <Jeffrey4l> tempest is not.
16:17:59 <sdake> tempest can output in json
16:18:18 <inc0> so we need to filter out data too, so we'll get the important numbers, too much data is horrible for readability
16:18:20 <sdake> the idea is we output in json, and figure out how to format it after we lose access to the cluster resources
16:18:26 <berendt> Can somebody add the used tests to https://review.openstack.org/#/c/352101/ ?
16:18:36 <berendt> sdake +1
16:18:42 <inc0> +1 berendt, and the commands to run them
16:18:49 <sean-k-mooney> do we have a list of all the rally and tempest tests that are being run, e.g. is it all the tests or just some?
16:18:50 <inc0> so we'll have a stable testing methodology
16:18:54 <sdake> back on pbourke's topic, if rally can do many more tests beyond boot/delete, why aren't we doing those
16:19:20 <sdake> i don't know anything about rally
16:19:23 <sdake> is it a misconfig issue?
16:19:25 <coolsvap> rally can do a lot more
16:19:31 <sdake> let's grab as much data as we can
16:19:46 <sdake> ok so scenario #2 needs to be finished then with proper rally output
16:19:53 <coolsvap> like I said today, we have ceph but no cinder endpoint
16:20:06 <berendt> Who will add the rally tests to the review?
16:20:10 <sdake> will rally not run without cinder?
16:20:22 <inc0> but we do want cinder ;)
16:20:23 <berendt> Rally can be run independently
16:20:27 <coolsvap> it will not be able to run the nova + cinder scenarios
16:20:40 <inc0> coolsvap, let's redeploy ceph+cinder plz
16:20:40 <sdake> berendt the person doing the tests pulls the review, adds the results, git reviews
16:20:51 <inc0> and do the full package of scenarios
16:20:56 <berendt> the results are useless without the used test files
16:21:17 <coolsvap> inc0: ack
16:21:32 <sdake> berendt could you expand - i don't understand what you mean by used test files
16:21:45 <inc0> will get on that after the meeting, I'll make hangouts
16:21:48 <berendt> rally used test scenario files, those files are missing at the moment
16:22:17 <sean-k-mooney> berendt: these files https://github.com/openstack/rally/tree/master/rally-jobs
16:22:33 <coolsvap> berendt: the files are available
16:22:35 <berendt> sean-k-mooney do we not have our own test set?
16:22:58 <coolsvap> berendt: I am using the sample scenarios and we can expand on that
16:23:09 <sean-k-mooney> berendt: not that i'm aware of.
16:23:13 <berendt> normally it is required to tweak the parameters because the parameters used in the samples are pretty low
16:24:09 <coolsvap> berendt: yes
16:24:14 <sdake> berendt are you indicating that the defaults in rally are not useful?
16:24:18 <berendt> e.g. the boot server scenario in nova.yml boots 2 instances, this is pretty useless when working with 130 compute nodes
16:24:35 <berendt> sdake they are useful for a devstack environment, not for a big environment
16:24:54 <sdake> berendt how big of a job is it to crank out a rally parameter list
16:25:31 <coolsvap> earlier they were not even getting started, we had an issue with the image, now I have started tweaking those and got to the cinder/ceph issue
16:25:39 <berendt> sdake I do not know, I think each used test scenario should be checked to see if it makes sense or not
16:25:54 <inc0> let's put that into a review
16:25:57 <inc0> I mean the list of tests
16:26:03 <inc0> and we'll comment on them
16:26:03 <sean-k-mooney> berendt: there are quite a lot https://github.com/openstack/rally/tree/master/samples/tasks/scenarios
16:26:09 <Jeffrey4l> we may need: 1) boot 50 vms 2) boot 100 vms 3) boot 500 vms
16:26:10 <sdake> inc0 it is already there
16:26:27 <berendt> sean-k-mooney it's only about the parameters, they are very low
16:26:27 <coolsvap> Jeffrey4l: +1
16:26:28 <sdake> Jeffrey4l good call
16:26:28 <inc0> Jeffrey4l, we're looking at 2k+ vm space
16:26:29 <inc0> even more
16:26:40 <sdake> we can special-case the ones that we think are important
16:26:47 <berendt> we need to boot more than 5k instances on such a big cluster to have a realistic load
16:26:49 <sdake> but really need someone to do this work - and do it now (as in today)
16:26:52 <Jeffrey4l> sean-k-mooney, these are still very low and useless.
16:26:52 <sdake> as it becomes a blocker
16:26:56 <inc0> if we use a small flavor we can make it 10k :)
16:27:10 <sdake> the alternative is to eject rally entirely from our plans
16:27:13 <coolsvap> sdake: I can do that
16:27:28 <coolsvap> can we do the cinder + ceph later?
16:27:33 <Jeffrey4l> inc0, that's not possible. rabbitmq will crash. boot 500 vms means booting 500 vms at one time.
16:27:38 <berendt> sdake no, we should use rally, i think skipping it should not be an option
16:27:49 <Jeffrey4l> rabbitmq can not support 10k now.
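The parameter scaling berendt and Jeffrey4l are describing lives in the rally task file itself. A rough sketch only: the scenario name is the stock NovaServers boot-and-delete sample that ships with rally, while the flavor, image, times and concurrency values below are illustrative placeholders rather than figures agreed in the meeting.

    ---
      NovaServers.boot_and_delete_server:
        -
          args:
            flavor:
              name: "m1.tiny"
            image:
              name: "cirros"
          runner:
            type: "constant"
            times: 500         # total servers booted over the run (cf. the 50/100/500 idea)
            concurrency: 50    # how many boot requests are in flight at once
          context:
            users:
              tenants: 5
              users_per_tenant: 2

Starting such a file with rally task start and dumping the raw results as JSON (rally task results) would line up with the raw-output approach agreed above; the exact CLI invocations should be double-checked against the rally version deployed on the cluster.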
16:27:54 <sdake> berendt i was presenting two options, not one :)
16:28:06 <berendt> Jeffrey4l this can be defined, you can start 10 instances, 10 instances, and so on, rally is very flexible
16:28:08 <inc0> let's push it to the limit of breaking tho, will be interesting to see
16:28:18 <Jeffrey4l> berendt, yes it is.
16:28:25 <sdake> let's push to the limits once we get our basics down inc0
16:28:25 <pbourke> coolsvap: I can deploy cinder in the morning
16:28:35 <pbourke> getting rally sorted is the crucial part I think
16:28:39 <inc0> pbourke, let's do that
16:28:40 <sdake> those can be further scenarios we add, if we have time
16:28:41 <Jeffrey4l> but the really useful test is: booting as many machines as possible at one time.
16:28:45 <inc0> you and me deploy cinder and ceph
16:28:56 <pbourke> inc0: I have only about 35 mins left in my day
16:28:59 <coolsvap> so I will start the jobs tonight
16:29:06 <inc0> should be almost 2 deployments ;)
16:29:10 <sdake> coolsvap cool - so you take on sorting out a sanitary list of rally scenario tests
16:29:13 <inc0> I'll continue if we fail
16:29:19 <pbourke> inc0: ok
16:29:27 <coolsvap> sdake: i will do
16:29:38 <inc0> btw I'd love to see the full time from bare metal to OS
16:29:43 <sdake> coolsvap need it by tomorrow morning US time so pbourke has something to work with
16:29:44 <pbourke> getting rally documented would be great
16:29:45 <inc0> so including the kolla host playbook
16:29:52 <rhallisey> ya that would be cool
16:29:53 <sdake> inc0 yes that is last up
16:29:56 <berendt> pbourke I can help with this tomorrow
16:30:08 <coolsvap> pbourke: ack, you will have both
16:30:19 <Jeffrey4l> we need to push the rally test jobs to the PS. we can share them across the different test scenarios
16:30:31 <sdake> Jeffrey4l sounds good
16:30:55 <sdake> ok anything further on this topic?
16:32:13 <pbourke> I think we're on track :)
16:32:18 <sdake> sounds good
16:32:51 <sdake> #topic newton 3
16:33:10 <sdake> A - U - G - U - S - T - 3 - 1
16:33:30 <sdake> #link https://launchpad.net/kolla/+milestone/newton-3
16:33:47 <sdake> as you can see we have a bunch of work in progress
16:33:59 <sdake> anything not in good progress by the 15th-20th is getting booted to ocata
16:34:31 <sdake> please update blueprints if you feel your work is in good progress
16:34:44 <sdake> good progress is defined as _will finish by august 31_
16:35:00 <berendt> sdake do we have a list with priorities?
16:35:17 <sdake> berendt we do, but my bookmarks are not functional
16:35:23 <sdake> can someone link from midcycle?
16:35:53 <Jeffrey4l> https://etherpad.openstack.org/p/kolla-N-midcycle-priority
16:36:22 <pbourke> inc0: on the topic of newton, can you add https://review.openstack.org/#/c/337594/6/ansible/group_vars/all.yml to your long todo list to have another look
16:36:43 <pbourke> others are free to review also. that review can close a blueprint quick enough
16:36:57 <sdake> speaking of reviews
16:37:05 <sdake> the review queue is long and torturous
16:37:12 <sdake> lots of 50 file reviews in there
16:37:22 <sdake> we need to get feedback going on those
16:37:28 <sdake> so they land for milestone #3
16:37:38 <berendt> Can we make a priority list right now? I do not understand the linked etherpad.
16:37:51 <sdake> i'd like folks working on osic to split their time between osic and the review queue
16:38:03 <sdake> berendt - the lower the number, the higher the priority
16:38:15 <pbourke> did you guys use some crazy counting again :p
16:38:40 <sdake> each # represents a vote from a midcycle attendee
16:38:44 <berendt> pbourke I think so :)
16:38:47 <sdake> crazy is my middle name
16:38:53 <sdake> (actually it's charles, but it's close)
16:38:58 <inc0> pbourke, 9.2 ;)
16:39:14 <egonzalez90> I will focus on reviews
16:39:21 <sdake> here is the bottom line folks
16:39:24 <sdake> if we crank out the reviews
16:39:28 <sdake> and get them all merged
16:39:40 <sdake> all of our priorities for newton will be met
16:39:41 <berendt> sdake I want to have a list with concrete and realistic priorities (e.g. first close customisation, then close gnocchi integration, ...)
16:39:47 <sdake> but i don't see much review cranking
16:40:09 <sdake> berendt - i can help you parse that file after the meeting
16:40:14 <berendt> sdake ok
16:40:20 <sdake> i don't want to rehash our priorities in another session
16:40:23 <sdake> that could take hours
16:40:27 <sdake> it did the last time :)
16:41:10 <sdake> so please hit the review queue - let's make the code quality good - so that we don't have a bajillion bugs to fix in the rcs
16:41:40 <sdake> 1-2 hours a day from core reviewers should get our review queue under control
16:41:47 <sdake> 3-4 hours should make it empty
16:42:10 <sdake> any Q&A?
16:43:04 <sdake> #topic kolla-kubernetes
16:43:23 <rhallisey> lots of progress on the neutron/nova front
16:43:47 <rhallisey> the goal for the community is to put together a demo when that work completes
16:44:21 <rhallisey> one of the main issues we're running into is the repo/config split
16:44:31 <rhallisey> and adding temporary fixes into kolla
16:44:50 <sdake> rhallisey i hear ya - going to have to wait until milestone #3 + some time after, or hack around it
16:45:17 <rhallisey> yup we can bring it up again as we get closer to O
16:45:39 <rhallisey> any other news from the community here?
16:46:08 <sdake> say, we have a few new cats in the channel
16:46:16 <sdake> rather, meeting
16:46:40 <rhallisey> ok, nice job everyone :). Let's keep up the pressure here. The solution we have going at the moment is really god
16:46:42 <rhallisey> good
16:46:47 <sdake> if you're interested in getting in on the ground floor of a fresh new development effort, kolla-kubernetes is where to make an impact, and i'm sure it will be big
16:46:49 <berendt> Can we talk about sapcc/openstack-kube?
16:46:50 <srwilkers> a colleague and i have been looking at the heat kubernetes blueprint, and we think we have a solid understanding of what we need to do. we're still running into a few issues with the quickstart guide as this is new to us. we're fairly positive our work shouldn't take long once we resolve that
16:47:33 <sdake> if you're interested in furthering the lead of kolla, kolla-ansible is where to make an impact
16:47:34 <rhallisey> srwilkers, ok. Ask away in #openstack-kolla and we can help
16:47:36 <rhallisey> berendt, sure
16:48:23 <berendt> How to proceed with this subject? I actually do not understand the existence of sapcc/openstack-kube. They are using Kolla code and do not mention Kolla, do not provide a LICENSE file, ...
16:48:23 <rhallisey> there are other implementations out there for kubernetes
16:49:00 <rhallisey> berendt, it's a specific sap solution
16:49:04 <rhallisey> kolla-kube is a community effort for broad deployments
16:49:23 <rhallisey> it would be awesome if they would like to join the community, but it's up to them
16:49:33 <inc0> berendt, so they just shared what they made in house
16:49:37 <inc0> we talked with them at the summit
16:49:46 <sdake> yup we are open to a merger of ideas and engineering there
16:49:54 <sdake> berendt what else did you want to know
16:50:06 <inc0> some of the ideas we have in kolla-k8s are theirs
16:50:15 <rhallisey> berendt, ya there are lots of in-house deployments of kubernetes out there. It would be nice to merge them into one project in the openstack namespace
16:50:35 <berendt> It is fine for me, I just do not understand why they do not support the existing projects.
16:50:40 <rhallisey> the best way to do that is to continue to build community and build the project
16:51:22 <inc0> berendt, they started before the summit
16:51:30 <inc0> so there was no existing project yet
16:51:34 <sdake> berendt who knows - people go their own way for a variety of reasons - we hope to get to a good working relationship with these folks
16:51:38 <inc0> and I can't say they didn't support us :)
16:51:53 <rhallisey> the first I heard of this was mid February
16:52:09 <rhallisey> a few weeks after I posted the spec
16:52:17 <rhallisey> but we were in no shape to start working on it
16:52:29 <inc0> imho they shared enough knowledge and ideas that influenced kolla-k8s that they should be listed as co-authors of it ;)
16:52:57 <sdake> yup who knows what the beef was about
16:52:59 <sdake> we will sort it out
16:53:05 <sdake> it will take time
16:53:13 <sdake> and may or may not have a good outcome for the community
16:53:20 <sdake> (the openstack community)
16:53:21 <rhallisey> ya we'll see
16:53:38 <rhallisey> k all set for kolla-kube news
16:53:40 <sdake> future hard to predict - filled with emotions the future is :)
16:53:47 <rhallisey> just keep up the good work :)
16:53:54 <duonghq> I think one of the biggest problems is that they have not had any licensing information.
16:54:04 <sdake> Jeffrey4l https://review.openstack.org/306928
16:54:11 <sdake> #topic open discussion
16:54:20 <sdake> Jeffrey4l wanted some discussion on this
16:54:25 <sdake> Jeffrey4l you have the floor, fine sir
16:54:26 <Jeffrey4l> this is an easy one.
16:54:36 <Jeffrey4l> this is a new feature for kolla configuration files.
16:54:39 <berendt> duonghq Jup, this should be fixed to avoid legal issues.
16:54:56 <Jeffrey4l> we talked about this before. first found wins may be a better solution.
16:55:33 <Jeffrey4l> This one needs eyes. If we approve it, we can add this kind of feature for other non-ini format configuration files.
16:56:01 <Jeffrey4l> this will make Kolla more flexible
16:56:13 <pbourke> the issue of non-ini config customisation has been around for quite a while
16:56:20 <pbourke> this is the cleanest implementation I've seen
16:56:29 <Jeffrey4l> yes
16:56:57 <pbourke> the only issue is if we ship critical fixes in our templates, operators using this feature will miss out
16:57:06 <Jeffrey4l> Just throwing this to the meeting. Let's talk about the details in the PS.
16:57:58 <Jeffrey4l> another one I want to talk about is https://review.openstack.org/352089
16:58:03 <sean-k-mooney> Jeffrey4l: actually as part of the bifrost work i had to introduce a merge_yaml plugin.
16:58:19 <Jeffrey4l> sean-k-mooney, that's cool.
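To make the "first found wins" idea concrete, the pattern in Ansible typically looks roughly like the sketch below, assuming Kolla's usual node_custom_config / node_config_directory layout. This is an illustration of the general pattern, not the actual change in review 306928, and the keystone file names are placeholders.

    - name: Copying over wsgi-keystone.conf (first found wins)
      template:
        src: "{{ item }}"
        dest: "{{ node_config_directory }}/keystone/wsgi-keystone.conf"
      with_first_found:
        # first existing path wins; later entries are never consulted
        - "{{ node_custom_config }}/keystone/{{ inventory_hostname }}/wsgi-keystone.conf"
        - "{{ node_custom_config }}/keystone/wsgi-keystone.conf"
        - "wsgi-keystone.conf.j2"

The first path that exists wins outright, which is why pbourke's caveat applies: an operator-supplied file completely shadows the project template, so template fixes shipped later never reach that service's config.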
16:58:36 <Jeffrey4l> first found wins can handle those files which are hard to merge.
16:58:49 <Jeffrey4l> like apache configuration files.
16:58:59 <berendt> Let's move to #openstack-kolla, we are nearly out of time
16:59:08 <Jeffrey4l> OK.
16:59:10 <duonghq> +1 berendt
16:59:17 <sdake> thanks folks
16:59:25 <sdake> we can overflow into #openstack-kolla
16:59:28 <sdake> #endmeeting
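For contrast with first found wins, the merge_yaml plugin sean-k-mooney mentioned takes the merge approach: an operator override is layered on top of the project template instead of replacing it. A sketch of how such a task could be wired up, assuming it mirrors the sources/dest interface of Kolla's existing merge_configs plugin; the parameter names and paths here are assumptions, not taken from the bifrost patch.

    - name: Merging operator overrides into bifrost.yml (illustrative only)
      merge_yaml:                                             # parameter names assumed, per merge_configs convention
        sources:
          - "{{ role_path }}/templates/bifrost.yml.j2"        # project default template
          - "{{ node_custom_config }}/bifrost/bifrost.yml"    # operator override, merged on top
        dest: "{{ node_config_directory }}/bifrost/bifrost.yml"

Merging suits structured formats like YAML and INI, while first found wins remains the fallback for formats that cannot be merged cleanly, per Jeffrey4l's apache example.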