16:01:02 <kozhukalov> #startmeeting Fuel
16:01:03 <openstack> Meeting started Thu Apr  2 16:01:02 2015 UTC and is due to finish in 60 minutes.  The chair is kozhukalov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:06 <openstack> The meeting name has been set to 'fuel'
16:01:12 <kozhukalov> #chair kozhukalov
16:01:13 <openstack> Current chairs: kozhukalov
16:01:18 <angdraug> hi
16:01:20 <docaedo> Hello
16:01:22 <kozhukalov> agenda as usual
16:01:23 <barthalion> hi there
16:01:23 <vkramskikh> hi
16:01:23 <mkwiek> hello
16:01:24 <prmtl> hey
16:01:35 <kozhukalov> #link https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
16:02:29 <agordeev> hi
16:02:41 <kozhukalov> just a few topics to discuss
16:02:43 <xarses> hi
16:02:45 <akislitsky_> hi
16:02:55 <kozhukalov> #topic Openrc changes (xarses)
16:03:08 <xarses> I have been working to remove deps in fuel on /root/openrc. We are keeping it around for the administrators, but we shouldn't be using it anymore.
16:03:16 <xarses> The last open commit is #link https://review.openstack.org/#/c/158995/ the last revision failed CI, but I should have it passing today
16:03:34 <xarses> any questions?
16:03:57 <angdraug> #link https://bugs.launchpad.net/fuel/+bug/1396594
16:03:59 <openstack> Launchpad bug 1396594 in Fuel for OpenStack 6.1.x "/root/openrc is used for Neutron scripts" [High,Confirmed] - Assigned to Andrew Woodward (xarses)
16:04:09 <angdraug> ^ got reopened today, anyone can comment why?
16:04:15 <angdraug> aglarendil: xarses: ^
16:05:55 <xarses> huh, aglarendil, changed it. I will have to follow up with him
16:06:14 <angdraug> was that because https://review.openstack.org/#/c/158995/ refers to that bug in commit message as Related?
16:07:06 <xarses> openstack infra should have done it if it was part of that i guess
16:07:15 <angdraug> any doc changes to prevent people from using openrc in the future?
16:07:22 <kozhukalov> aglarendil: is in the office but is not going to join us
16:07:29 <kozhukalov> maybe later
16:08:00 <xarses> angdraug: other than we need to post to the ML, I'm not sure
16:08:09 <angdraug> maybe a subsection for dev guid?
16:08:12 <angdraug> guide?
16:08:32 <angdraug> e.g. "How to use Keystone credentials in fuel-library"
16:08:51 <xarses> ok
16:09:03 <zigo> hi
16:09:11 <angdraug> moving on?
16:09:14 <kozhukalov> yes
16:09:25 <aglarendil> actually I am here
16:09:25 <kozhukalov> looks like no one has any q
16:09:42 <aglarendil> Just answering xarses question
16:09:47 <aglarendil> here is a review https://review.openstack.org/158995
16:10:03 <aglarendil> It says it is related to openrc usage
16:10:08 <aglarendil> and it is still not merged
16:10:09 <kozhukalov> aglarendil: ^^^ can you give us the reason why this https://bugs.launchpad.net/fuel/+bug/1396594 was reopened
16:10:11 <openstack> Launchpad bug 1396594 in Fuel for OpenStack 6.1.x "/root/openrc is used for Neutron scripts" [High,Confirmed] - Assigned to Andrew Woodward (xarses)
16:10:31 <angdraug> aglarendil: so my guess was right, you want to keep that bug open until 158995 is merged?
16:10:53 <kozhukalov> angdraug: yes, you were right
16:10:56 <kozhukalov> moving on
16:10:58 <aglarendil> angdraug: yes
16:11:14 <kozhukalov> #topic 200 nodes status (dbelova)
16:12:24 <DinaBelova> folks, I simply want to ask you about the status of two bugs
16:12:40 <DinaBelova> https://bugs.launchpad.net/fuel/+bug/1431547 and https://bugs.launchpad.net/fuel/+bug/1438127
16:12:42 <openstack> Launchpad bug 1431547 in Fuel for OpenStack 6.1.x "Public VIP is inaccessible from external networks" [Critical,In progress] - Assigned to Oleksiy Molchanov (omolchanov)
16:12:43 <openstack> Launchpad bug 1438127 in Fuel for OpenStack "No internet access at fuel plugin pre-deploy, ubuntu" [High,Confirmed] - Assigned to Oleksiy Molchanov (omolchanov)
16:13:17 <kozhukalov> DinaBelova: could you please give us the status of this feature? only those two bugs are actual? right?
16:13:28 <kozhukalov> everything else works?
16:14:07 <DinaBelova> kozhukalov, may we please postpone my topic?
16:14:11 <DinaBelova> I'm on call right now
16:14:27 <kozhukalov> aglarendil: ^^^ can you give a comment on the first one?
16:14:42 <aglarendil> yep
16:14:55 <aglarendil> the actual problem is incorrect routing table for haproxy namespaces
16:14:56 <kozhukalov> DinaBelova: ahh, ok
16:15:20 <aglarendil> the solution is to route all traffic that is routable through the host controller via the hapr-ns interface
16:15:34 <aglarendil> Oleksii Molchanov is working on the fix right now
16:16:14 <aglarendil> workaround is pretty simple - get cidrs of your networks like internal/storage/whatever and run
16:16:23 <kozhukalov> and the second one is also up to him? right?
16:16:41 <aglarendil> `ip netns exec haproxy ip r r <cidr> via 240.0.0.1 dev hapr-ns`
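aglarendil's one-liner can be expanded into a small dry-run script. This is a sketch only: the CIDR values are placeholders for your internal/storage/whatever networks, and `ip r r` is spelled out as `ip route replace`; the `haproxy` namespace, `240.0.0.1` gateway, and `hapr-ns` device come from the workaround above.

```shell
# Dry-run sketch of the workaround: print (rather than apply) one
# route-replace command per cluster network CIDR inside the haproxy
# namespace. CIDRs below are placeholders.
NS=haproxy
GW=240.0.0.1
DEV=hapr-ns

build_route_cmd() {
    # Emit the command that would be run for one CIDR; `ip r r` in the
    # original message is shorthand for `ip route replace`.
    local cidr=$1
    echo "ip netns exec $NS ip route replace $cidr via $GW dev $DEV"
}

for cidr in 192.168.0.0/24 192.168.1.0/24; do
    build_route_cmd "$cidr"    # pipe the output to sh on a controller to apply
done
```

Printing the commands first makes it easy to review them before actually running anything inside the namespace on a controller.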
16:17:21 <aglarendil> kozhukalov: We did not start investigation yet
16:17:34 <aglarendil> kozhukalov: but yes, I think we already have the solution in mind
16:17:38 <mihgen> As I understand, alwex has a workaround already for this VIP issue
16:18:42 <kozhukalov> aglarendil: ok, thanx
16:19:32 <kozhukalov> another bug, which was found by the scale team a couple of days ago,
16:20:00 <kozhukalov> is that for IBP + CentOS the admin interface was not properly configured
16:20:08 <kozhukalov> it's been fixed
16:20:45 <kozhukalov> #link https://review.openstack.org/#/c/169736/
16:21:00 <kozhukalov> and another issue is still not completely clear
16:21:21 <kozhukalov> it is related to the lack of resources on the master node
16:21:36 <DinaBelova> folks, I'm here again :)
* DinaBelova looking through the logs
16:22:20 <kozhukalov> DinaBelova: have you managed to increase master node resources and test 200 nodes again?
16:22:36 <barthalion> https://review.openstack.org/#/c/169736/ it has hit me really hard some time ago :(
16:22:39 <barthalion> I mean, the bug
16:22:44 <barthalion> so cheers for fixing it
16:22:45 <kozhukalov> ^^^ there was an issue
16:22:51 <DinaBelova> @kozhukalov - it's in progress, as redeployment took much time
16:23:10 <DinaBelova> so we'll see the results later today
16:23:43 <kozhukalov> DinaBelova: great
16:23:49 <kozhukalov> looking forward
16:23:54 <DinaBelova> so basically the issue with public VIPs inaccessible from the external network is preventing us not just from 200 nodes feature certification
16:24:07 <DinaBelova> but also from the failure/stability testing
16:24:21 <DinaBelova> as we wanted to certify 6.1 from this point of view as well
16:24:29 <kozhukalov> like aglarendil said it is quite easy to fix it
16:24:42 <DinaBelova> if so, it will be perfect
16:24:49 <kozhukalov> his ETA for this is optimistic
16:24:50 <DinaBelova> and we're waiting SO MUCH for this fix
16:25:02 <DinaBelova> as theoretically we needed to start this failure testing this week
16:25:11 <DinaBelova> but it's not possible now
16:25:16 <maximov> Dina you should have a workaround for VIP issue
16:25:19 <aglarendil> folks, Oleksii is working on it already - give us a day and everything will be fine
16:25:24 <DinaBelova> @maximov - we use it
16:25:42 <kozhukalov> ok, if there are no other q, moving on
16:25:42 <DinaBelova> the issue is that this workaround does not work when killing controllers :)
16:26:10 <DinaBelova> VIPs start migrating, and workaround works only if all VIPs are on one controller
16:26:18 <maximov> ok I see
16:26:22 <DinaBelova> so now we're unblocked speaking about simple testing
16:26:33 <DinaBelova> usual dataplane and controlplane
16:26:42 <DinaBelova> but as for the failure, we're still blocked
16:27:04 <kozhukalov> #topic IBP (agordeev)
16:27:18 <agordeev> Build script rewriting is still in progress.
16:27:19 <agordeev> Also bash version of build script is not fixed yet. These two patches should be merged  https://review.openstack.org/#/c/168090/ https://review.openstack.org/#/c/166774/ in order to fix it. At least our scale lab and BVT jobs are affected by those two.
16:28:04 <maximov> @agordeev do we have ETA for build script?
16:28:10 <kozhukalov> dpyzhov_: can we merge those two?
16:28:19 <kozhukalov> dpyzhov_: ^^^
16:29:09 <agordeev> maximov: we should be able to execute a very raw version of it by tomorrow
16:29:46 <agordeev> about ETA, I'd say 1w
16:30:06 <dpyzhov_> I'm looking at these fixes
16:30:28 <dpyzhov_> ok, merging
16:30:59 <kozhukalov> another critical bug for IBP is the non-working progress bar
16:31:22 <kozhukalov> during building of target images and provisioning
16:31:44 <kozhukalov> afaik warpc__ is going to implement this tomorrow
16:31:51 <kozhukalov> will see
16:32:00 <kozhukalov> dpyzhov_: ok, thanx
16:32:32 <kozhukalov> btw there is work-in-progress version of python build script
16:32:43 <kozhukalov> there are two patches
16:32:55 <kozhukalov> https://review.openstack.org/#/c/167705 and https://review.openstack.org/#/c/169413
16:33:47 <kozhukalov> ok, if there are no questions here, let's move on
16:33:54 <warpc__> kozhukalov: yes, i am going to fix it tomorrow if no new urgent bugs arise
16:34:29 <kozhukalov> warpc__: thanx for ack
16:34:48 <kozhukalov> #topic Open discussion
16:35:03 <kozhukalov> actually i have a question
16:35:06 <mattymo> I just want to point out I'm really happy with the IBP transition to working on Fuel master. It's been really stable and pleasant
16:35:25 <mattymo> good job kozhukalov and agordeev
16:35:42 <kozhukalov> mattymo: thanx, trying to do our best
16:35:43 <bookwar> while everyone is here, can I ask for a merge https://review.openstack.org/#/c/169025/
16:36:05 <maximov> regarding external ubuntu. are we going to run somesort of CI to test fuel deployments against upstream ubuntu after 6.1 GA
16:36:06 <maximov> ?
16:36:59 <bookwar> you mean fix the iso and just update the mirror?
16:37:03 <kozhukalov> maximov: we absolutely need this
16:37:42 <maximov> ok, who is in charge? @holser?
16:37:58 <kozhukalov> our default settings assume we use upstream mirrors (which are on the Internet)
16:38:14 <maximov> so they will be updated periodically
16:38:16 <maximov> right?
16:38:19 <bookwar> with external ubuntu we have a lot of variants we need to test on CI, we discussed it with Vitaliy Parakhin
16:38:30 <bookwar> i plan to write a summary
16:38:41 <kozhukalov> so, it is one of the major requirements that fuel is able to deploy cluster from online repos
16:38:45 <barthalion> what happened to the proposed packages testing?
16:39:02 <maximov> ok, please keep me in the loop
16:39:07 <bookwar> barthalion: pawel is working on it
16:39:25 <bookwar> should be enabled at the end of this week i think
16:40:18 <bookwar> kozhukalov: we need to discuss the test plan, if it will be only bvt or half of the swarm or something else
16:40:52 <agordeev> also a q. https://bugs.launchpad.net/fuel/+bug/1438913 in comment #2 the blueprint 'continue on minor failure' was mentioned. Does only IBP lack it? Will classic provisioning continue to deploy a cluster if one of the compute nodes failed to be provisioned?
16:40:54 <openstack> Launchpad bug 1438913 in Fuel for OpenStack "Stuck on unmounting os-root on one of the nodes during deployment" [High,New] - Assigned to Fuel provisioning team (fuel-provisioning)
16:40:54 <kozhukalov> ikalnitsky was supposed to prepare this  test plan
16:42:37 <kozhukalov> agordeev: no, our current logic assumes all nodes need to be provisioned properly before starting deployment
16:43:25 <bookwar> kozhukalov: currently we agreed to run BVT and it is what we are implementing
16:43:39 <kozhukalov> ok, guys i have a question about our weekly meeting format
16:43:39 <bookwar> but i didn't see it formally written somewhere
16:44:01 <kozhukalov> bookwar: we need to ask ikalnitsky
16:44:26 <agordeev> kozhukalov: i already knew about our current logic. I'm not sure whether we have a bug here. If classic deploys successfully without 1 compute node, we definitely have a bug here.
16:44:58 <kozhukalov> the q about the meeting format is: what do you guys think about how we can improve this and make the weekly irc more valuable?
16:46:02 <kozhukalov> afaik, classic deployment is also gonna fail if one of the nodes was not provisioned
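The all-or-nothing behaviour described here can be sketched as a small check (the status names are illustrative, not Nailgun's actual state names):

```shell
# Sketch of the current logic: deployment starts only when every node
# reports successful provisioning; a single failure aborts the run.
all_provisioned() {
    local status
    for status in "$@"; do
        [ "$status" = "provisioned" ] || return 1
    done
    return 0
}

if all_provisioned provisioned provisioned error; then
    echo "starting deployment"
else
    echo "aborting: at least one node failed provisioning"
fi
```

The 'continue on minor failure' blueprint would relax exactly this check for non-critical roles such as compute.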
16:46:38 <kozhukalov> any opinions about weekly meeting?
16:47:04 <kozhukalov> do we need to start another discussion in ML?
16:47:33 <kozhukalov> i mean for me it looks like people tend to avoid using this communication channel
16:48:25 <kozhukalov> i'd like it to be the place where we can share status of what we are working on in different locations
16:48:48 <barthalion> I think it's fine as is
16:49:00 <barthalion> it's not like we always have something important to say
16:49:05 <bookwar> kozhukalov: maybe announcing the agenda earlier than an hour before the meeting could help
16:49:06 <kozhukalov> i don't think so
16:49:13 <angdraug> I don't think people avoid this meeting, they just don't put enough effort into preparing for it
16:49:18 <kozhukalov> we usually have couple topics in agenda
16:49:43 <kozhukalov> and those topics appear after annoying reminders
16:50:14 <angdraug> reminders are useful and not annoying, please keep sending them
16:50:36 <kozhukalov> angdraug: ok, what can we do to make them interested in deeper preparation?
16:51:41 <angdraug> volunteer team leads to present their team's activities and let them delegate specific topics to their developers
16:51:43 <docaedo> To some extent I think the meeting doesn't have huge participation because we already cover much on the openstack mailing list when we need broader participation
16:51:54 <docaedo> (and that's a good thing)
16:51:59 <prmtl> folks, I have one quick thing to add
16:52:23 <docaedo> I like angdraug's suggestion, that would be nice to get a weekly recap from team leads
16:52:26 <angdraug> if we have a big discussion on openstack-dev, here is a good place to highlight that discussion
16:52:49 <kozhukalov> angdraug: docaedo +1
16:53:54 <prmtl> for some time we have had "gate-*" jobs as a way to see that tests do not fail after merge
16:53:54 <prmtl> for example https://fuel-jenkins.mirantis.com/job/gate-fuel-web/
16:53:54 <prmtl> I saw a bug https://bugs.launchpad.net/fuel/+bug/1439732 a few minutes ago
16:53:55 <openstack> Launchpad bug 1439732 in Fuel for OpenStack "Add "puppet_log" cluster attribute to installation_info white list " [Critical,Fix committed] - Assigned to Alexander Kislitsky (akislitsky)
16:53:55 <prmtl> folks, please pay attention to the e-mails that you get from CI when your patch gets merged and fails
16:53:57 <prmtl> it's all for avoiding broken master
16:54:38 <prmtl> I think that the next step should be to report failed 'gate-*' jobs as critical bugs automatically
16:55:05 <kozhukalov> prmtl: we usually don't merge code which is -1 from CI
16:55:33 <prmtl> but I already saw two cases when gate-* was red and author of the change did nothing
16:55:49 <bookwar> kozhukalov: it happens when you merge two patches that each had +1 but their combined result fails CI
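prmtl's suggestion could look roughly like the following sketch. The Jenkins base URL matches the gate job link above, but the polling and bug-filing steps are hypothetical placeholders (the real Jenkins JSON API call is only shown in a comment, and "filing a bug" is reduced to an echo):

```shell
# Hypothetical sketch: decide from a finished gate-* job's result
# whether a critical bug should be filed automatically.
JENKINS=https://fuel-jenkins.mirantis.com   # base URL from the discussion

should_file_bug() {
    # Jenkins reports SUCCESS, UNSTABLE or FAILURE for finished builds.
    case "$1" in
        FAILURE|UNSTABLE) return 0 ;;
        *) return 1 ;;
    esac
}

check_gate_job() {
    local job=$1 result=$2
    # In a real poller the result would come from something like:
    #   curl -s "$JENKINS/job/$job/lastCompletedBuild/api/json"
    if should_file_bug "$result"; then
        echo "would file Critical bug: $job is $result"
    else
        echo "$job is $result, nothing to do"
    fi
}

check_gate_job gate-fuel-web FAILURE
```

Separating the decision (`should_file_bug`) from the reporting step keeps the sketch testable and leaves the actual Launchpad integration open.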
16:56:46 <kozhukalov> ok, about the meeting, I'm going to write another letter about this describing angdraug's suggestion, and we'll see whether it helps
16:57:03 <kozhukalov> prmtl: thanx for bringing this up
16:57:17 <kozhukalov> if no q any more let's end
16:57:27 <kozhukalov> thanx everyone for attending
16:57:35 <kozhukalov> #endmeeting