16:01:02 #startmeeting Fuel
16:01:03 Meeting started Thu Apr 2 16:01:02 2015 UTC and is due to finish in 60 minutes. The chair is kozhukalov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:04 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:06 The meeting name has been set to 'fuel'
16:01:12 #chair kozhukalov
16:01:13 Current chairs: kozhukalov
16:01:18 hi
16:01:20 Hello
16:01:22 agenda as usual
16:01:23 hi there
16:01:23 hi
16:01:23 hello
16:01:24 hey
16:01:35 #link https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
16:02:29 hi
16:02:41 just a few topics to discuss
16:02:43 hi
16:02:45 hi
16:02:55 #topic Openrc changes (xarses)
16:03:08 I have been working to remove deps in fuel on /root/openrc. We are keeping it around for the administrators, but we shouldn't be using it anymore.
16:03:16 The last commit open is #link https://review.openstack.org/#/c/158995/ the last revision failed CI, but I should have it passing today
16:03:34 any questions?
16:03:57 #link https://bugs.launchpad.net/fuel/+bug/1396594
16:03:59 Launchpad bug 1396594 in Fuel for OpenStack 6.1.x "/root/openrc is used for Neutron scripts" [High,Confirmed] - Assigned to Andrew Woodward (xarses)
16:04:09 ^ got reopened today, can anyone comment why?
16:04:15 aglarendil: xarses: ^
16:05:55 huh, aglarendil changed it. I will have to follow up with him
16:06:14 was that because https://review.openstack.org/#/c/158995/ refers to that bug in the commit message as Related?
16:07:06 openstack infra should have done it if it was part of that i guess
16:07:15 any doc changes to prevent people from using openrc in the future?
16:07:22 aglarendil is in the office but is not going to join us
16:07:29 maybe later
16:08:00 angdraug: other than we need to post to the ML, I'm not sure
16:08:09 maybe a subsection for dev guid?
16:08:12 guide?
16:08:32 e.g. "How to use Keystone credentials in fuel-library"
16:08:51 ok
16:09:03 hi
16:09:11 moving on?
16:09:14 yes
16:09:25 actually I am here
16:09:25 looks like no one has any q
16:09:42 Just answering xarses' question
16:09:47 here is a review https://review.openstack.org/158995
16:10:03 It says it is related to openrc usage
16:10:08 and it is still not merged
16:10:09 aglarendil: ^^^ can you give us the reason why this https://bugs.launchpad.net/fuel/+bug/1396594 was reopened
16:10:11 Launchpad bug 1396594 in Fuel for OpenStack 6.1.x "/root/openrc is used for Neutron scripts" [High,Confirmed] - Assigned to Andrew Woodward (xarses)
16:10:31 aglarendil: so my guess was right, you want to keep that bug open until 158995 is merged?
16:10:53 angdraug: yes, you were right
16:10:56 moving on
16:10:58 angdraug: yes
16:11:14 #topic 200 nodes status (dbelova)
16:12:24 folks, I simply want to ask you about the status of two bugs
16:12:40 https://bugs.launchpad.net/fuel/+bug/1431547 and https://bugs.launchpad.net/fuel/+bug/1438127
16:12:42 Launchpad bug 1431547 in Fuel for OpenStack 6.1.x "Public VIP is inaccessible from external networks" [Critical,In progress] - Assigned to Oleksiy Molchanov (omolchanov)
16:12:43 Launchpad bug 1438127 in Fuel for OpenStack "No internet access at fuel plugin pre-deploy, ubuntu" [High,Confirmed] - Assigned to Oleksiy Molchanov (omolchanov)
16:13:17 DinaBelova: could you please give us the status of this feature? are only those two bugs still open? right?
16:13:28 everything else works?
16:14:07 kozhukalov, may we please postpone my topic?
16:14:11 I'm on a call right now
16:14:27 aglarendil: ^^^ can you give a comment on the first one?
16:14:42 yep
16:14:55 the actual problem is an incorrect routing table for haproxy namespaces
16:14:56 DinaBelova: ahh, ok
16:15:20 the solution is to route all the traffic that is routable through the host controller through the hapr-ns interface
16:15:34 Oleksii Molchanov is working on the fix right now
16:16:14 the workaround is pretty simple - get the CIDRs of your networks like internal/storage/whatever and run
16:16:23 and the second one is also up to him? right?
16:16:41 `ip netns exec haproxy ip r r via 240.0.0.1 dev hapr-ns`
16:17:21 kozhukalov: We did not start the investigation yet
16:17:34 kozhukalov: but yes, I think we already have the solution in mind
16:17:38 As I understand, alwex has a workaround already for this VIP issue
16:18:42 aglarendil: ok, thanx
16:19:32 another bug which was found by the scale team a couple of days ago
16:20:00 is that for IBP + CentOS the admin interface was not properly configured
16:20:08 it's been fixed
16:20:45 #link https://review.openstack.org/#/c/169736/
16:21:00 and another issue is still not completely clear
16:21:21 it is related to the lack of resources on the master node
16:21:36 folks, I'm here again :)
16:21:43 * DinaBelova looking through the logs
16:22:20 DinaBelova: have you managed to increase master node resources and test 200 nodes again?
16:22:36 https://review.openstack.org/#/c/169736/ it has hit me really hard some time ago :(
16:22:39 I mean, the bug
16:22:44 so cheers for fixing it
16:22:45 ^^^ there was an issue
16:22:51 @kozhukalov - it's in progress, as redeployment took a lot of time
16:23:10 so we'll see the results later today
16:23:43 DinaBelova: great
16:23:49 looking forward
16:23:54 so basically the issue with public VIPs inaccessible from the external network is preventing us not just from 200 nodes feature certification
16:24:07 but also from the failure/stability testing
16:24:21 as we wanted to certify 6.1 from this point of view as well
16:24:29 like aglarendil said it is quite easy to fix it
16:24:42 if so, it will be perfect
16:24:49 his ETA for this is optimistic
16:24:50 and we're waiting SO MUCH for this fix
16:25:02 as theoretically we needed to start this failure testing this week
16:25:11 but it's not possible now
16:25:16 Dina, you should have a workaround for the VIP issue
16:25:19 folks, Oleksii is working on it already - give us a day and everything will be fine
16:25:24 @maximov - we use it
16:25:42 ok, if there are no other q, moving on
16:25:42 the issue is that this workaround does not work when killing controllers :)
16:26:10 VIPs start migrating, and the workaround works only if all VIPs are on one controller
16:26:18 ok I see
16:26:22 so now we're unblocked as far as simple testing is concerned
16:26:33 the usual dataplane and controlplane
16:26:42 but as for the failure testing, we're still blocked
16:27:04 #topic IBP (agordeev)
16:27:18 Build script rewriting is still in progress.
16:27:19 Also the bash version of the build script is not fixed yet. These two patches should be merged https://review.openstack.org/#/c/168090/ https://review.openstack.org/#/c/166774/ in order to fix it. At least our scale lab and BVT jobs are affected by those two.
16:28:04 @agordeev do we have an ETA for the build script?
16:28:10 dpyzhov_: can we merge those two?
16:28:19 dpyzhov_: ^^^
16:29:09 maximov: we are going to be able to execute a very raw version of it by tomorrow
16:29:46 about ETA, I'd say 1w
16:30:06 I'm looking at these fixes
16:30:28 ok, merging
16:30:59 another critical bug for IBP is the non-working progress bar
16:31:22 during building of target images and provisioning
16:31:44 afaik warpc__ is going to implement this tomorrow
16:31:51 will see
16:32:00 dpyzhov_: ok, thanx
16:32:32 btw there is a work-in-progress version of the python build script
16:32:43 there are two patches
16:32:55 https://review.openstack.org/#/c/167705 and https://review.openstack.org/#/c/169413
16:33:47 ok, if there are no q here, let's move on
16:33:54 kozhukalov: yes, i am going to fix it tomorrow if no new urgent bugs arise
16:34:29 warpc__: thanx for the ack
16:34:48 #topic Open discussion
16:35:03 actually i have a question
16:35:06 I just want to point out I'm really happy with the IBP transition to working on the Fuel master. It's been really stable and pleasant
16:35:25 good job kozhukalov and agordeev
16:35:42 mattymo: thanx, trying to do our best
16:35:43 while everyone is here, can I ask for a merge https://review.openstack.org/#/c/169025/
16:36:05 regarding external ubuntu: are we going to run some sort of CI to test fuel deployments against upstream ubuntu after 6.1 GA
16:36:06 ?
16:36:59 you mean fix the iso and just update the mirror?
16:37:03 maximov: we absolutely need this
16:37:42 ok, who is in charge? @holser?
16:37:58 our default settings assume we use upstream mirrors (which are on the Internet)
16:38:14 so they will be updated periodically
16:38:16 right?
16:38:19 with external ubuntu we have a lot of variants we need to test on CI, we discussed it with Vitaliy Parakhin
16:38:30 i plane to write a summary
16:38:41 so, it is one of the major requirements that fuel is able to deploy a cluster from online repos
16:38:45 what happened to the proposed packages testing?
16:38:48 s/plane/plan/
16:39:02 ok, please keep me in the loop
16:39:07 barthalion: pawel is working on it
16:39:25 should be enabled at the end of this week i think
16:40:18 kozhukalov: we need to discuss the test plan, whether it will be only bvt or half of the swarm or something else
16:40:52 also a q. in comment #2 of https://bugs.launchpad.net/fuel/+bug/1438913 the blueprint 'continue on minor failure' was mentioned. Is it just IBP that lacks it? Will classic provisioning continue to deploy a cluster if one of the compute nodes failed to be provisioned?
16:40:54 Launchpad bug 1438913 in Fuel for OpenStack "Stuck on unmounting os-root on one of the nodes during deployment" [High,New] - Assigned to Fuel provisioning team (fuel-provisioning)
16:40:54 ikalnitsky was supposed to prepare this test plan
16:42:37 agordeev: no, our current logic assumes all nodes need to be provisioned properly before starting deployment
16:43:25 kozhukalov: currently we agreed to run BVT and that is what we are implementing
16:43:39 ok, guys, i have a question about our weekly meeting format
16:43:39 but i didn't see it formally written anywhere
16:44:01 bookwar: we need to ask ikalnitsky
16:44:26 kozhukalov: i knew about our current logic. I'm not sure whether we have a bug here. If classic provisioning will deploy successfully without 1 compute node, we definitely have a bug here.
16:44:58 the q about the meeting format is: what do you guys think about how we can improve this and make the weekly irc more valued?
16:46:02 afaik, classic deployment is also gonna fail if one of the nodes was not provisioned
16:46:38 any opinions about the weekly meeting?
16:47:04 do we need to start another discussion on the ML?
16:47:33 i mean, to me it looks like people tend to avoid using this communication channel
16:48:25 i'd like it to be the place where we can share status of what we are working on in different locations
16:48:48 I think it's fine as is
16:49:00 it's not like we always have something important to say
16:49:05 kozhukalov: maybe announcing the agenda a bit earlier than an hour before the meeting could help
16:49:06 i don't think so
16:49:13 I don't think people avoid this meeting, they just don't put enough effort into preparing for it
16:49:18 we usually have a couple of topics in the agenda
16:49:43 and those topics appear after annoying reminders
16:50:14 reminders are useful and not annoying, please keep sending them
16:50:36 angdraug: ok, what can we do to make people interested in deeper preparation?
16:51:41 volunteer team leads to present their team's activities and let them delegate specific topics to their developers
16:51:43 To some extent I think the meeting doesn't have huge participation because we already cover much on the openstack mailing list when we need broader participation
16:51:54 (and that's a good thing)
16:51:59 folks, I have one quick thing to add
16:52:23 I like angdraug's suggestion, it would be nice to get a weekly recap from team leads
16:52:26 if we have a big discussion on openstack-dev, here is a good place to highlight that discussion
16:52:49 angdraug: docaedo +1
16:53:54 for some time we have had "gate-*" jobs as a way to see whether tests fail after a merge
16:53:54 for example https://fuel-jenkins.mirantis.com/job/gate-fuel-web/
16:53:54 I saw a bug https://bugs.launchpad.net/fuel/+bug/1439732 a few minutes ago
16:53:55 Launchpad bug 1439732 in Fuel for OpenStack "Add "puppet_log" cluster attribute to installation_info white list " [Critical,Fix committed] - Assigned to Alexander Kislitsky (akislitsky)
16:53:55 folks, please pay attention to the e-mails that you get from CI after your patch gets merged and fails
16:53:57 it's all for avoiding a broken master
16:54:38 I think that the next step should be to report failed 'gate-*' jobs as critical bugs automatically
16:55:05 prmtl: we usually don't merge code which has a -1 from CI
16:55:33 but I already saw two cases when gate-* was red and the author of the change did nothing
16:55:49 kozhukalov: it happens when you merge two patches, each with +1, but the combined result fails CI
16:56:46 ok, about the meeting, i'm going to write another letter about this, describe angdraug's suggestion, and we will see whether it helps
16:57:03 prmtl: thanx for bringing this up
16:57:17 if there are no more q, let's end
16:57:27 thanx everyone for attending
16:57:35 #endmeeting
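
Editor's note: below is a minimal sketch of the haproxy-namespace routing workaround aglarendil described at 16:15-16:16 for bug 1431547 (public VIP inaccessible from external networks). The namespace name `haproxy`, the gateway 240.0.0.1, and the `hapr-ns` interface are taken from the log; the one-liner quoted in the log omits the destination network, so passing each cluster CIDR as the route destination is an interpretation of the surrounding explanation, and the example CIDRs are placeholders, not values from the meeting.

```bash
#!/bin/bash
# Sketch of the per-network workaround from the log (run as root on a controller).
# ASSUMPTION: replace these placeholder CIDRs with your cluster's
# management/storage/etc. network CIDRs.
CIDRS="192.168.0.0/24 192.168.1.0/24"

for cidr in $CIDRS; do
    # Route traffic for each cluster network out of the haproxy namespace
    # through the controller host via the hapr-ns interface (gateway 240.0.0.1,
    # as quoted at 16:16:41).
    ip netns exec haproxy ip route replace "$cidr" via 240.0.0.1 dev hapr-ns
done

# Verify the resulting routing table inside the haproxy namespace.
ip netns exec haproxy ip route show
```

As noted at 16:25-16:26, this workaround only helps while all VIPs stay on one controller; once controllers are killed and VIPs migrate during failure testing, it no longer applies, which is why the proper fix from Oleksii Molchanov was still needed.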