14:01:23 #startmeeting interop_challenge
14:01:24 Meeting started Wed Mar 22 14:01:23 2017 UTC and is due to finish in 60 minutes. The chair is tongli. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:01:25 tongli: it gives me headaches just thinking about that xD
14:01:25 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:01:28 The meeting name has been set to 'interop_challenge'
14:02:02 can you guys do an o/ again so that the meeting minutes get your name recorded?
14:02:14 o/
14:02:20 o/
14:02:21 o/
14:02:21 o/
14:02:31 o/
14:02:36 o/
14:02:42 Thanks.
14:02:48 o/
14:03:00 topic Review last meeting action items
14:03:09 o/
14:03:10 #topic Review last meeting action items
14:03:32 http://eavesdrop.openstack.org/meetings/interop_challenge/2017/interop_challenge.2017-03-15-14.01.html
14:04:05 there are only two actions other than reviewing patch sets from last week.
14:04:29 BTW, Brad is in Vegas; he will most likely miss today's meeting.
14:04:45 o/
14:04:50 welcome vkmc ;)
14:04:52 action #1 was Mark looking into the copyright/license thing.
14:04:57 thx!
14:05:06 o/
14:05:07 @markvoelker, any updates?
14:05:50 I haven't quite finished the patch, but it's mostly just elbow grease. Should have that ready soon (today I'm pretty booked, so likely later this week)
14:06:27 @markvoelker, great. thanks. so are you planning to add headers to the workload files or create a new file?
14:06:54 My current iteration amends existing files per the guidelines we discussed last week
14:07:13 so your patch will be on top of my patch?
14:07:21 yep, IIRC that's what we agreed to and was in the guidelines
14:07:24 yep, just an iteration
14:07:31 it should be fairly simple, in any case
14:07:37 ok. great. Thanks @markvoelker.
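(The headers under discussion are the standard Apache License 2.0 notice. For the mostly YAML/ansible workload files, the conventional form is a comment block like the following; the exact wording is the ASF-recommended boilerplate, shown here only as an illustration of what "adding a ton of #Apache" means.)

```shell
# Licensed under the Apache License, Version 2.0 (the "License"); you may
# not use this file except in compliance with the License. You may obtain
# a copy of the License at
#
#     http://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
# License for the specific language governing permissions and limitations
# under the License.
```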
14:07:40 I thought the easiest way to do it would be to make it a separate patch that's dependent on yours
14:07:40 just adding a ton of #Apache, I assume ;)
14:07:47 That way we can keep them separate in gerrit
14:07:54 dmellado: basically, yes. =)
14:08:03 @markvoelker, agreed.
14:08:27 so please, please review this patch set: https://review.openstack.org/#/c/433874/
14:08:50 ok, the second action was for Daniel to look at os-client-config.
14:09:04 tongli: I was overcommitted this week, but it's on my TODO ;)
14:09:09 I think the discussion was about the NFV workload configuration
14:09:40 also, if there aren't any more items, I wanted to start a discussion on whether we should try to put up, even if only a reduced version,
14:09:43 of the workload
14:09:58 ok. please work with @HelenYao on the NFV workload regarding the configuration.
14:09:59 so we can add it to the openstack CI, to have some kind of 'minimum-workload' around
14:10:09 will do ;)
14:10:16 @dmellado, thanks.
14:10:37 I believe that is all the actions from last week, but guys, please review the workload patches.
14:10:44 we really need to get them merged.
14:11:06 ++
14:11:13 #topic patches that need reviews
14:11:31 https://review.openstack.org/#/q/status:open+project:openstack/interop-workloads,n,z
14:11:44 we only have two.
14:12:02 please review; we need to at least have the k8s workload merged.
14:12:18 #topic k8s patch set issues
14:12:43 Earlier I had some issues when I ran the workload using the ubuntu image,
14:13:06 tongli: the k8s one?
14:13:16 oh, nevermind, saw the topic
14:13:19 later I found I had misconfigured docker on ubuntu
14:13:26 that has now been fixed.
14:14:06 I had another issue earlier as well: after I deploy the cockroachdb cluster, I found that if I use an even number of nodes for cockroachdb,
14:14:23 the cluster won't be very stable; sometimes I see pod crashes.
14:14:41 but when I use an odd number of nodes for the cockroachdb cluster, then it has no issues.
14:14:55 I think I will need more tests on this.
14:15:03 cluster quorum stuff?
14:15:17 Yeah, pretty typical for distributed systems.
14:15:19 tongli: did that happen all the time?
14:15:23 be aware that the cockroachdb docker image is Alpha.
14:15:32 I mean, all the time? even, crashed
14:15:36 odd, worked all the time?
14:15:44 @dmellado, not all the time, even with an even number of cockroachdb nodes.
14:15:51 i remember once reading their blog about the scalability issue
14:15:52 hm
14:15:56 yes, an odd number of nodes worked fine.
14:16:01 very stable.
14:16:14 so it would be very nice if everybody ran the workload and saw the behavior.
14:16:24 please do read the README.rst file.
14:16:41 if stack_size is set to 5, the cockroachdb cluster will be 4 nodes.
14:16:59 +1, will try to do some tests
14:17:08 tongli: so we need to set it to 6?
14:17:11 Even numbers of nodes in a distributed DB are almost always a bad idea. =)
14:17:17 one node is used as the master node; the cockroachdb cluster won't use that node. so we end up with stack_size - 1 cockroachdb nodes.
14:17:44 @markvoelker, that is what I suspected; I think cockroachdb master node election may have some issues.
14:17:50 but in theory it should work.
14:18:12 but anyway, at patch set #37, I have everything working.
14:18:23 and I am planning to give a demo next week to Mark.
14:18:35 if the time is confirmed, I invite everybody to participate.
14:19:10 this will be a remote meeting with Mark to show what has been developed,
14:19:15 tongli: pls do send an invite by then ;)
14:19:20 so he may have some suggestions.
14:19:25 * markvoelker notes for posterity that tongli is talking about the other Mark (aka Sparky Collier)
14:19:30 @dmellado, yes I will send out an invite.
14:19:40 markvoelker: thanks for the clarification xD
14:19:51 @markvoelker, yes, sorry, Mark Collier from the foundation.
14:20:29 @markvoelker, I will show it to Brad on Friday since he has been traveling so much lately.
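(The quorum point raised above is easy to see with a little arithmetic: a consensus-based store like cockroachdb needs a strict majority of nodes, so an even-sized cluster tolerates no more failures than the next smaller odd size — it just adds one more member that can crash. A quick sketch, which also shows the stack_size - 1 relationship tongli mentions:)

```shell
#!/bin/sh
# For each stack_size, the workload reserves one node as the k8s master,
# leaving stack_size - 1 cockroachdb nodes. A majority quorum needs
# floor(n/2) + 1 votes, so the cluster survives n - quorum failures.
for stack_size in 4 5 6 7; do
  n=$(( stack_size - 1 ))        # cockroachdb nodes
  quorum=$(( n / 2 + 1 ))        # strict majority
  tolerated=$(( n - quorum ))    # failures survivable
  echo "stack_size=$stack_size db_nodes=$n quorum=$quorum tolerated_failures=$tolerated"
done
```

(Note that a 3-node and a 4-node cluster both tolerate exactly one failure: the extra even node buys no additional fault tolerance, which is why stack_size=6, i.e. 5 db nodes, is the better choice.)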
14:20:51 o/
14:20:59 at the moment, the k8s workload is done.
14:21:12 it works on both ubuntu and coreos images.
14:21:13 are there any docs so that we can try it on our existing clouds
14:21:21 just to have an idea
14:21:38 mnaser: it should mostly work as it is
14:21:44 @mnaser, yes, please read the README.rst file.
14:21:45 just download tongli's patch
14:21:53 may i ask for the link to the repo
14:22:02 https://review.openstack.org/#/c/433874/37/workloads/ansible/shade/k8s/README.md
14:22:18 https://github.com/openstack/interop-workloads
14:22:21 mnaser: ^^
14:22:22 merci tongli and dmellado
14:22:25 + pls tongli's patch
14:22:28 that is the readme. the patch link is above. I'll post it here again: https://review.openstack.org/#/c/433874/
14:22:39 you're welcome mnaser
14:23:23 the patch set is now at #37; I am getting a bit tired of adding to it. please guys, get it merged.
14:23:52 if there are bugs (I am pretty sure there are), we can fix them.
14:24:03 +1, tongli I'll give it a spin locally
14:24:18 and if it's ok I'd say we can merge it and have it as a starting point at least
14:24:23 I certainly do not want to be the champion in terms of number of patch sets.
14:24:32 heh
14:25:14 also, guys, I would like everybody to know that if I have stack_size set to 6, the workload finishes within 6 minutes.
14:25:23 +1 tongli
14:25:30 ive done a lot of ansible work so
14:25:38 i can throw in patches which could speed/clean things up
14:25:40 I have also made changes so that we docker load the container images.
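(Pre-seeding images with docker load, as tongli describes, avoids every node pulling the large cockroachdb image from the registry at deploy time. A minimal sketch of the pattern — the tag and file names here are illustrative, not necessarily what the workload uses:)

```shell
# Pull once on a machine with good registry access, then export to a tarball.
docker pull cockroachdb/cockroach
docker save cockroachdb/cockroach -o cockroach.tar

# Copy the tarball to each target node (e.g. with ansible's copy module),
# then import it into the node's local docker image cache:
docker load -i cockroach.tar
```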
14:26:01 i agree with merging stuff so that we can tweak it as we go with smaller patches, it makes review easier
14:26:14 so that you do not have to wait for the docker images to be pulled from the container image repo, especially since the cockroachdb image is quite big,
14:26:15 no one wants to go through a big merge :) (sorry you had to go through that tongli )
14:26:35 mnaser: feel free to propose any change you might want to ;)
14:26:50 i would like to do so but ideally once that big patch merges :)
14:27:18 worst case you can just do as markvoelker did and make your change dependent on tongli's
14:27:32 #action tongli schedules a demo meeting with Mark Collier next week and sends out an invite to everybody.
14:27:49 +1 to smaller patches
14:28:22 BTW, I have also requested to have an OSCI account opened, so that I can run the workload against more clouds.
14:28:39 thanks to @luzC for making it happen.
14:28:56 you mean the osic cloud, tongli
14:29:08 @luzC, yes, osic cloud
14:29:34 ahh yes... we have a project for the challenge...
14:29:36 :)
14:29:40 I can log in to it, but cannot create a router, which needs to be resolved.
14:30:09 having that on OSIC would be cool ;)
14:30:22 oh, huge updates from the Interop Challenge China Chapter.
14:30:31 but tongli, luzC, actually (and maybe we can discuss that in person at boston for future work)
14:30:36 we'd *need* some kind of CI
14:30:51 I don't think it's good in the long term to have that
14:30:59 'it runs on my cloud' review approach :\
14:31:09 there was a meeting last night and it was decided to use the improved LAMPStack work for the keynote demo in BJ at the April Global OpenSource Summit.
14:31:13 dmellado I agree
14:31:14 otherwise I'll go to Boston with the 'it works on devstack' t-shirt
14:31:32 @dmellado, haha.
14:31:38 hahaha yes, that would be amazing
14:31:44 so we would need to have maybe a subset
14:31:51 that would fit openstack CI requirements
14:32:02 @dmellado, can you help with that?
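(Stacking a change on top of an open review, as markvoelker did with the license headers, is straightforward with git-review. A sketch using the k8s change number from this meeting; the branch name and commit message are just examples:)

```shell
# Check out the open change (the patch set under review) locally.
git review -d 433874

# Commit new work on top of it and push; gerrit records the new change
# as dependent on 433874, so the two stay separate reviews.
git checkout -b license-headers
# ...edit files, add the headers...
git commit -a -m "Add Apache license headers to workload files"
git review
```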
14:32:20 I can help, but won't be able to handle that myself
14:32:33 I'll try to coordinate that, though
14:32:44 I tried squeezing everything onto an 8GB dsvm
14:32:48 and it was a disaster
14:32:58 @dmellado, please, let us know if someone else can help.
14:33:01 did you try disabling the unnecessary services
14:33:02 so how about getting the workload merged first, and then afterwards
14:33:09 let's strip it down
14:33:14 * mnaser can help with some stuff
14:33:18 mnaser: yeah, totally
14:33:27 but we can sync on that if you have the time ;)
14:33:42 yeah, feel free to pm (do we have an interop channel where everyone spends time at?)
14:33:50 #action dmellado and mnaser work together to get CI working.
14:34:11 mnaser: we actually do, #openstack-interop
14:34:14 there is the #openstack-interop channel.
14:34:17 #
14:34:24 okay, cool, thanks
14:34:32 so far only the doc gate is enabled
14:34:47 I do plan to enable at least some linters once the code's around
14:34:50 @dmellado, I would love to have more stuff tested at the gate.
14:35:04 we can do some linting but i think the more interesting part is having it tested against other clouds
14:35:14 ex: i'd love to gate it against our cloud (or document how to do that)
14:35:23 hmm maybe a third party CI
14:35:32 following the "external CI" pattern used in other projects like Cinder or Neutron for example
14:35:38 mnaser: exactly
14:36:06 i'll do a bit of reading about that
14:36:06 I'll need to check if we can actually do that while disabling that job for the upstream one
14:36:09 as it just won't fit
14:36:22 @mnaser, so at present what are the options for third party CI?
14:36:38 so every $provider can listen to changes at gerrit for the interop repo
14:36:53 and then it would check out the change, run it against their cloud, and report results back
14:37:21 tongli https://review.openstack.org/#/c/338139/
14:37:28 that could be cool; of course at first I just won't count the 3rd party CI as voting
14:37:30 see how there are external CIs from providers (IBM, etc)
14:37:37 * dmellado has some concerns about some external CIs, tbh
14:37:43 of course
14:37:45 ok. thanks for the info.
14:37:55 it's a step in the right direction at least
14:38:01 if 6 clouds fail, even if they're all non-voting,
14:38:05 @mnaser, agreed.
14:38:16 i think we can agree we broke something :-P
14:38:25 haha.
14:38:47 #topic Boston Summit On Stage Keynote Committed Parties
14:39:02 currently we have 11 parties, 5 spots left.
14:39:12 i am not sure how we can say we'd like to do it
14:39:25 if your company is not on the list, please make sure someone from your company commits to it.
14:39:31 hi all - question: is there a reason Ocata isn't targeted in that list?
14:39:48 where can we find "the list" :-p
14:39:49 tongli: Working on it!
14:40:00 We want to be there -- just need to make sure we can run the workload
14:40:27 @beisner, no, I just do not think we require Ocata or any particular version of OpenStack.
14:40:31 mnaser https://wiki.openstack.org/wiki/Interop_Challenge#Boston_Summit_On_Stage_Keynote_K8S_Demo_Commited_Parties
14:40:32 Or maybe I should enter there right now?
14:40:33 mnaser: https://wiki.openstack.org/wiki/Interop_Challenge
14:40:50 probably not many public/private production clouds use Ocata yet.
14:41:20 beisner the idea is to test the workload on real-world products that people can get today. No particular version is required; the list there is just what versions people have productized right now.
14:41:25 luzC / dmellado : thanks, added
14:41:44 markvoelker, ok tyvm.
14:42:05 ok, only 4 spots left.
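(The "listen to changes at gerrit" pattern described above is typically built on gerrit's SSH event stream. A rough sketch of the trigger side — the account name is hypothetical, and real third-party CIs usually wrap this in a framework rather than a raw loop:)

```shell
# Stream gerrit events and react to patch set uploads on the interop repo.
ssh -p 29418 my-ci-account@review.openstack.org gerrit stream-events |
while read -r event; do
  echo "$event" | grep -q '"project":"openstack/interop-workloads"' || continue
  echo "$event" | grep -q '"type":"patchset-created"' || continue
  # Here the CI would fetch the change, run the workload against its own
  # cloud, and post the result back as a (non-voting) review comment.
  echo "new patch set on interop-workloads: triggering a run"
done
```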
14:42:21 please review the patch sets, please please please.
14:42:22 OpenPower is working to be there too
14:42:48 @krtaylor, great. would love to see how the workload runs on that cloud.
14:42:49 making sure we can run the workload atm
14:43:22 can i ask what is stopping us from merging tongli's work
14:43:22 @HelenYao, do you have any updates on the NFV workload?
14:43:33 tongli, thanks, I'll ping you in channel later with a few questions
14:43:55 @mnaser, good question, need +1s and +2s haha.
14:43:57 by 37 patch sets i think most of the things are ironed out (and it's been over a month).. maybe good to get to the bottom of it
14:44:06 tongli: i am working on the test improvement
14:44:16 the deployment should be working
14:44:25 we can do improvements in further patches
14:44:27 the test cases need to be polished
14:44:41 mnaser: just review time. Lots of folks traveling/out lately. I expect it'll land shortly (I know I'm planning to finish up this week)
14:44:48 okay cool
14:44:49 @HelenYao_, that is great. I have not had time to try it yet. thanks for working on it though.
14:45:22 HelenYao_ do we test the deployment on the M version?
14:45:50 it supports both M and previous versions
14:45:53 tongli: I'll +2+A your k8s patch
14:46:09 let's work from that -into smaller patches, please-
14:46:26 I already +2'd it
14:46:27 the deployment is tested on newton and mitaka
14:46:57 @luzC, thanks.
14:47:01 luzC: saw it, that's why I'll +2+A it
14:47:07 and let's split the reviews from now on
14:47:12 #action smaller patches, please xD
14:48:16 folks could we discuss the nfv workload?
Please help review it
14:48:31 zhipeng: I'll review it again too
14:48:39 @zhipeng, agreed, please review that patch set as well,
14:48:42 HelenYao_ has been working crazy hours to meet the deadline Brad mentioned last time
14:48:48 overall, IIRC (and pls let me go through it again before)
14:49:05 it'd be great to have the tech details that HelenYao_ kindly explained
14:49:10 re: juju charms usage and so on
14:49:26 but I need to go over it again, will do that later today
14:50:07 i think it is better to run the workload first to see if it works, get a visual sense :)
14:50:15 then dig into the tech details
14:51:05 yes. getting a rough view of the process will help you better grasp the tech details
14:51:20 ping me anytime if you have a problem
14:52:00 @HelenYao_, yes. that is right, thanks a lot for your help on the workload.
14:52:34 I'll get to that, thanks HelenYao_ and zhipeng ;)
14:53:03 any bugs and feedback are appreciated :)
14:53:17 another issue is that, let's say we were able to meet the deadline and also demo the nfv workload on stage,
14:53:25 we might need an actual pod on the stage
14:54:00 HelenYao_ plz correct me if i'm wrong: if we test the App behind a VPN, there will be sound issues
14:54:24 so it is better to present the demo using actual hardware
14:54:32 like the OPNFV Doctor demo at Barcelona
14:54:51 yes, the VPN might block the multimedia transmission (voice/video)
14:55:31 in my lab, the VPN does not allow multimedia
14:55:55 @HelenYao_, we've got 5 minutes left. Can we pick up this topic next week if we run out of time?
14:56:54 +1, I also need to leave
14:56:56 @HelenYao_, so we need to run this in a cloud which does not block voice and video?
14:56:57 thanks everyone!
14:57:13 @dmellado, thanks.
14:57:26 tongli: i am fine with discussing it next week
14:57:41 ok. let me put that on the agenda for next week.
14:57:41 tongli: the VPN is supposed to allow the multimedia
14:58:02 sounds good, thx
14:58:10 #action place NFV workload requirement discussion on next week's agenda.
14:59:05 ok, I think we've pretty much run out of time; any last minute items?
14:59:47 all right. I think we are good for today.
14:59:54 thanks, all
14:59:59 Thanks everyone! have a great day!!
15:00:05 #endmeeting