16:02:03 <kozhukalov> #startmeeting Fuel
16:02:04 <openstack> Meeting started Thu Apr 30 16:02:03 2015 UTC and is due to finish in 60 minutes.  The chair is kozhukalov. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:02:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:02:08 <openstack> The meeting name has been set to 'fuel'
16:02:14 <kozhukalov> #chair kozhukalov
16:02:15 <openstack> Current chairs: kozhukalov
16:02:20 <kozhukalov> hey guys
16:02:22 <mkwiek> hello
16:02:26 <docaedo> hello
16:02:26 <kaliya> hi
16:02:27 <prmtl> hi
16:02:27 <kozhukalov> who is here?
16:02:28 <akislitsky> hi
16:02:29 <daniel3_> hello
16:02:33 <xarses> hi
16:02:42 <mihgen> hi
16:02:49 <angdraug> o/
16:02:59 <kozhukalov> ok, looks like we can start
16:03:13 <kozhukalov> agenda as usual
16:03:23 <kozhukalov> #link https://etherpad.openstack.org/p/fuel-weekly-meeting-agenda
16:03:38 <kozhukalov> #topic Broken master
16:03:38 <zongliang> hi
16:04:05 <kozhukalov> the reason is https://bugs.launchpad.net/fuel/+bug/1403088
16:04:05 <openstack> Launchpad bug 1403088 in Fuel for OpenStack 6.1.x "CentOS repository contains ruby conflicts - unable to run yum update plainly" [Critical,In progress] - Assigned to Artem Silenkov (asilenkov)
16:04:23 <kozhukalov> and we are waiting for the OSCI team to fix this
16:04:26 <mihgen> why didn't we revert it?
16:04:51 <kozhukalov> anyone from OSCI around?
16:05:11 <kozhukalov> looks like no one
16:05:13 <angdraug> rvyalov: ^
16:05:40 <angdraug> latest update from Artem is that the 6.1 ISO that's currently building has the necessary fixes
16:05:42 <kozhukalov> mihgen: I don't know why we didn't revert this
16:06:03 <kozhukalov> as far as i know it is broken since yesterday
16:06:37 <dburmistrov> asilenkov: <mihgen> why didn't we revert it?
16:06:44 <kozhukalov> angdraug: great, hope it's gonna become green
16:06:52 <angdraug> #link https://bugs.launchpad.net/fuel/+bug/1403088/comments/29
16:06:52 <openstack> Launchpad bug 1403088 in Fuel for OpenStack 6.1.x "CentOS repository contains ruby conflicts - unable to run yum update plainly" [Critical,In progress] - Assigned to Artem Silenkov (asilenkov)
16:06:58 <angdraug> status report as of yesterday
16:07:26 <asilenkov> we can't revert this because the ruby packages are not under version control
16:07:31 <kozhukalov> angdraug: thanks for this link
16:07:59 <kozhukalov> asilenkov: thanks for fix
16:08:15 <asilenkov> not yet we need iso built first
16:08:25 <asilenkov> let me explain a little
16:08:35 <angdraug> asilenkov: please make sure that the new ISO fixes it, if there's a new problem found with it we need to know about it and fix it today
16:08:43 <asilenkov> 1. ruby version and name are hardcoded inside the ISO make system
16:08:56 <asilenkov> yes sure. I'm working hard on it
16:09:20 <asilenkov> 2. We have a bunch of gems which are named in a bad manner
16:09:26 <asilenkov> E.g.:
16:09:57 <mihgen> :(
16:10:05 <mihgen> I hope we will fix it finally in the right way
16:10:09 <asilenkov> ruby21-nailgun-mcagents is the same as nailgun-mcagents
16:10:25 <asilenkov> and nailgun-mcagents wanted ruby=1.8
16:10:44 <asilenkov> some gems were accidentally not renamed, though
16:11:08 <asilenkov> some gems have no ruby version pinned and take whichever is available
16:11:37 <asilenkov> so we had to recompile all the gems to overcome this
16:11:53 <asilenkov> to ensure we have the right ruby with the right gems
16:12:18 <kozhukalov> ok, looks like the situation is under control
16:12:27 <kozhukalov> if there are no other comments on this, let's move on
16:12:28 <asilenkov> the ISO build system is not aware of ruby, so we had to hack the hardcoded values in bootstrap and sandbox
16:13:28 <asilenkov> yes, 5.1.2 is fixed; 6.1 is on the way. It builds locally, but I'm not sure about BVT. I'll take care of it until all problems are resolved. Will report by email and Slack. TY
16:13:36 <kozhukalov> yes, and those hardcoded versions were introduced because we had some problems with ruby versions earlier
16:14:22 <kozhukalov> asilenkov: thanks a lot for your efforts
16:14:25 <kozhukalov> moving on
16:14:39 <kozhukalov> #topic Recent activities in fuel-python
16:14:54 <kozhukalov> actually i have nothing to say here
16:15:07 <kozhukalov> all known activities are listed in agenda
16:15:35 <kozhukalov> please skim through the list and ask questions
16:16:15 <kozhukalov> cleaning up the environment after building an image is in progress
16:16:52 <kozhukalov> the issue here is that we need to take care of those loop devices and other temporary files if the build fails
16:16:57 <angdraug> should we discuss apt vs ipv6?
16:17:15 <kozhukalov> yes, I'd like to discuss this
16:17:18 <angdraug> looking at agenda seems like there's a disagreement on this topic
16:17:43 <kozhukalov> my opinion is clear, I'm for forcing apt to use v4
16:18:10 <kozhukalov> #link https://review.openstack.org/#/c/175848/
16:18:23 <kozhukalov> everyone agrees with v4
16:18:36 <xarses> I don't understand why this is necessary
16:18:58 <kozhukalov> except one guy who is an expert in networks
16:19:43 <kozhukalov> xarses: because Canonical's DNS responds with both v4 and v6 addresses
16:19:52 <kozhukalov> and apt tries to use v6
16:20:00 <kozhukalov> and obviously fails
16:20:37 <kozhukalov> because neither we nor most of our customers support v6
16:20:53 <xarses> so you intend to kick the problem down the road
16:20:56 <kozhukalov> and thus puppet can not fetch packages
16:20:58 <xarses> instead of fixing the problem?
16:21:23 <kozhukalov> xarses: what is your suggestion how we can fix this?
16:21:44 <xarses> and we are actively interested in supporting IPv6
16:21:45 <kozhukalov> force people to start supporting v6?
16:22:15 <xarses> by the sound of the issue, it should only use the v6 address when the v4 fails
16:22:15 <kozhukalov> we can support it, but we cannot force other people to support it
16:22:44 <xarses> did we identify why the v4 fails when both are supported, but works when v6 is disabled?
16:22:54 <kozhukalov> v6 fails
16:22:57 <kozhukalov> not v4
16:23:41 <xarses> #link https://bugs.launchpad.net/fuel/+bug/1446227/comments/12
16:23:41 <openstack> Launchpad bug 1442672 in Fuel for OpenStack "duplicate for #1446227 All lnx providers for L2 resources of L23network can't work without installing ovs" [High,Fix committed] - Assigned to Sergey Vasilenko (xenolog)
16:23:44 <kozhukalov> ok, i see your point but i don't have any particular recipe
16:24:29 <xarses> so this fixes a high/critical bug, but we still have a problem in that we blacklisted v6 in the process
16:24:56 <xarses> we need to create a bug for 7.0 to learn why this occurred all of a sudden, and implement a proper fix
16:25:21 <kozhukalov> but switching to v4 solves the problem as far as i know, so this comment might not be totally true
16:25:48 <kozhukalov> xarses: will you?
16:26:06 <xarses> yes, I will create a bug for 7
16:26:20 <angdraug> based on what xarses said above, switching to v4 masks the problem, while we still don't know the root cause
16:26:48 <kozhukalov> #action xarses creates a bug for 7.0 about ipv6 and default route
16:26:52 <angdraug> which is good enough for 6.1 but I agree that we should investigate this further in 7.0
16:26:54 <xarses> It's ok to make hacks like this near a release; however, it also means that we need to follow up later
16:27:42 <kozhukalov> ok, any other questions here on this topic?
16:28:04 <kozhukalov> moving on
16:28:06 <xarses> filtering device
16:28:13 <kozhukalov> listening
16:28:40 <xarses> are we in sync with what nailgun-agent does for filtering now, or are they still at odds?
16:29:23 <xarses> including being able to override the removable device patterns
16:29:58 <kozhukalov> agordeev was going to prepare another patch for nailgun-agent, but he is going on vacation, so if he does not, I will
16:30:26 <xarses> should I file a bug for that or do you have one to work under?
16:30:53 <kozhukalov> xarses: not sure if we have such a bug
16:31:21 <xarses> ok, I will create a bug for that too, you can mark it duplicate if you have another
16:31:21 <kozhukalov> but I have this in my notes
16:31:45 <kozhukalov> xarses: yes, create please and assign it to me
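[Editor's note: a sketch of the device filtering being discussed — skipping removable devices by pattern, with an operator-overridable pattern list. The default patterns and function name are illustrative, not nailgun-agent's actual implementation.]

```python
import re

# Illustrative defaults for devices the agent should skip (CD-ROM,
# floppy, loop); the point under discussion is that operators can
# extend or override this list rather than it being hardcoded.
DEFAULT_REMOVABLE_PATTERNS = [r"^/dev/sr\d+$", r"^/dev/fd\d+$", r"^/dev/loop\d+$"]

def filter_devices(devices, extra_patterns=None):
    """Return only the devices not matching any removable pattern."""
    patterns = DEFAULT_REMOVABLE_PATTERNS + list(extra_patterns or [])
    compiled = [re.compile(p) for p in patterns]
    return [d for d in devices if not any(rx.match(d) for rx in compiled)]
```

Keeping nailgun-agent and the rest of the stack on one such shared pattern list is what "being in sync" would mean here.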
16:31:54 <xarses> ok, let's continue
16:32:14 <kozhukalov> #topic How close are we to HCF?
16:32:59 <kozhukalov> so, some stats are written down in agenda
16:33:25 <kozhukalov> we have 88 bugs on fuel and 30 of them are on fuel-python
16:33:55 <kozhukalov> this number includes all open bugs (critical and high)
16:34:12 <xarses> does that include docs bugs?
16:34:14 <kozhukalov> including those which are in-progress
16:34:29 <kozhukalov> no, i don't think so
16:35:05 <kozhukalov> i use http://lp-reports.vm.mirantis.net/
16:35:18 <kozhukalov> i don't know if it is public or not
16:35:38 <kozhukalov> it'd be great to open this for public
16:35:58 <kozhukalov> Partners        6 bugs
16:36:01 <mihgen> we need to add auth first
16:36:05 <kozhukalov> MOS OpenStack        32 bugs
16:36:13 <kozhukalov> MOS Linux        10 bugs
16:36:21 <kozhukalov> Unknown        15 bugs
16:36:47 <kozhukalov> mihgen: is there an ETA for when it could be done?
16:36:58 <kozhukalov> i mean auth for this service
16:38:36 <kozhukalov> ok, looks like we still have lots of bugs, but we've been speeding up this week
16:38:51 <kozhukalov> moving on
16:39:06 <kozhukalov> #topic Labor day
16:39:15 <kozhukalov> just announcement
16:39:49 <kozhukalov> Russian, Ukrainian and Polish teams are gonna be unavailable from 1 May till 4 May
16:40:06 <kozhukalov> moving on
16:40:23 <kozhukalov> #topic Testing in Puppet modules (xarses)
16:40:31 <xarses> So we have all these separate tasks, and some of them even have pre and post tests. I'd like to understand how/when these tests are run.
16:41:04 <Tatyanka_Leontov> we use it in integration gd based test
16:41:32 <xarses> gd?
16:41:48 <Tatyanka_Leontov> granular deployment :)
16:42:06 <kozhukalov> ohh, great abbreviation
16:42:13 <kozhukalov> ibp, gd
16:42:19 <kozhukalov> what else?
16:42:29 <xarses> do they run automatically with every deployment, or is there a specific way to enable them?
16:42:50 <Tatyanka_Leontov> we plan to execute it on FUEL-CI for each review, but for now integration with FUEL-CI is in progress
16:43:05 <mihgen> if let's say I changed keystone, will Fuel CI run only for keystone?
16:43:42 <Tatyanka_Leontov> yep, absolutely
16:44:15 <Tatyanka_Leontov> I'll write the announcement with instruction as soon as successful integrated in CI
16:44:39 <xarses> the pre/post tests, do they run automatically with every deployment now, or is there a way to enable them to do so?
16:45:13 <xarses> also can we make it (or will it outright) stop the deployment when it fails
16:46:07 <xarses> I ask because I think we spend a lot of time with snapshots trying to find out which component failed, leading to wasted time. It would be awesome to see a scorecard or something of the graph, and where it failed
16:46:16 <Tatyanka_Leontov> xarses: for now we use it only in those integration tests, but it seems we can use it everywhere
16:46:19 <xarses> somewhat similar to what we have for fuel-web
16:47:03 <bookwar> mihgen: not _only_ keystone, but _additionally_ keystone
16:48:02 <mihgen> bookwar: that's not the way xarses and I want it )
16:48:11 <mihgen> we want to make the CI cycle shorter
16:48:28 <bookwar> mihgen: i think we all need our ci cycle to be reliable
16:48:29 <mihgen> so if you changed keystone, let's run tests against keystone only
16:48:56 <kozhukalov> mihgen: +1 sounds rational
16:49:10 <mihgen> that's how it was designed at the very beginning when we had chef cookbooks
16:49:15 <bookwar> we should have default set of tests running in any case, with additional tests when we change certain particular subsystem
16:49:48 <mihgen> why to have tests for anything if we changed just 1 component?
16:50:01 <mihgen> it's integration test, not system
16:50:01 <kozhukalov> mihgen: yes, that was the time when we were young and romantic )
16:50:13 <bookwar> because we can not be 100% sure that it is the only one component affected
16:50:34 <mihgen> kozhukalov: yeah now I have beard )
16:50:52 <mihgen> bookwar: nope, you can easily check it
16:51:02 <aglarendil> folks, this depends on how you specify dependencies
16:51:11 <mihgen> for keystone, you've got to have mysql + may be memcached
16:51:17 <aglarendil> obviously change in keystone may affect the whole openstack
16:51:18 <kozhukalov> mihgen: and i have a daughter, i won )
16:51:29 <mihgen> then you run pre-test ensure everything is ready
16:51:43 <mihgen> then you deploy keystone, then you run simple tests against it
16:51:49 <mihgen> ensure that all endpoints are created
16:52:09 <mihgen> so it will cover 95% of keystone cases with a few REST requests
16:52:09 <bookwar> mihgen: how you define 'all' ?
16:52:24 <mihgen> no need to run full deployment to understand that keystone works
16:52:34 <mihgen> all?
16:52:54 <aglarendil> mihgen: and then you merge the code and understand that neutron is broken and your whole team is blocked
16:53:24 <aglarendil> mihgen: you can run keystone as a quick test to ensure keystone is working, but you will need to run the whole test like BVT before merging it into the master
16:53:25 <mihgen> you've got to have better coverage for keystone
16:54:01 <aglarendil> you can change API versions that your keystone supports and magically understand that your neutron-server has issues with it
16:54:06 <mihgen> aglarendil: if you insist, it can be done this way; still better than running Fuel CI against every patchset
16:54:10 <aglarendil> boom! all is broken
16:54:22 <aglarendil> mihgen: well, I am for running hierarchy of tests
16:54:24 <aglarendil> 1) unit
16:54:25 <mihgen> aglarendil: you've got to write good tests for keystone ;)
16:54:26 <aglarendil> 2) noop
16:54:28 <aglarendil> 3) granule test
16:54:51 <aglarendil> you cannot write tests for keystone with neutron :-) sorry )
16:54:59 <alex_didenko> 4) integration tests (deployments)
16:55:01 <aglarendil> this is a clash of areas
16:55:07 <aglarendil> yep, Alex, thx
16:55:21 <xarses> aglarendil: it doesn't mean that we should stop running full integration tests
16:55:23 <aglarendil> and run each next step only if previous ones succeeded
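[Editor's note: the hierarchy just listed (unit → noop → granular → integration, each stage gating the next) can be sketched in a few lines. The runner below is a hypothetical illustration of "run each next step only if previous ones succeeded", not an actual Fuel CI job definition.]

```python
def run_pipeline(stages):
    """Run (name, callable) test stages in order, stopping at the
    first failure so cheap stages gate the expensive integration run."""
    results = []
    for name, stage in stages:
        passed = stage()
        results.append((name, passed))
        if not passed:
            break  # later, more expensive stages never start
    return results
```

This is the compromise argued for below: negative results arrive after the cheap stages, and the slow integration run is only paid for patches that already pass them.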
16:55:44 <xarses> it does mean that we need to spend less time running integration tests just to find out that our changes are broken
16:55:57 <kozhukalov> 4 minutes
16:56:01 <aglarendil> xarses: I agree and I suggest to run tests in hierarchy
16:56:03 <xarses> integration tests are slow, and expensive to run
16:56:26 <aglarendil> yep, so run keystone quickly, ensure that everything works, switch to integration - that's fine
16:56:28 <alex_didenko> +100 for CI jobs hierarchy
16:56:31 <bookwar> i am all for reorganizing the tests workflow, but definitely against switching off default integration tests and letting the developer decide what he wants to test with a patchset. We can offer any number of custom tests as a separate service for developers, but we shouldn't have CI without a proper baseline
16:56:58 <aglarendil> +1 to bookwar + 1 to granular tests +1 to peace and love on the Earth
16:57:23 <xarses> -1 to peace, otherwise you are good
16:57:25 <mihgen> no heavy tests against every patchset please
16:57:37 <mihgen> may be only for merge gate
16:57:49 <mihgen> I hope it can be configured
16:58:14 <aglarendil> Mike, we just need to increase deployment speed, e.g. parallelize deployment tasks with an advanced orchestrator
16:58:16 <xarses> mihgen: that would be a good compromise, however we have a lot of integration failures currently
16:58:22 <angdraug> you want merge gate tests to be heavier than patch set tests?
16:58:25 <aglarendil> if the whole deployment takes 20 minutes, all our issues will be gone
16:58:26 <angdraug> that's even worse
16:58:32 <angdraug> aglarendil: +1
16:58:53 <xarses> maybe for now we don't start the integration test until the granular test completes
16:59:02 <xarses> we will get results faster
16:59:08 <xarses> aglarendil: +100
16:59:08 <aglarendil> as a compromise we can run a batch of tests at once
16:59:16 <angdraug> at least we will get negative results faster
16:59:20 <aglarendil> and merge them if everything is ok
16:59:22 <mihgen> I'm not so sure if we can get 20min deployment, but let's do if we can
16:59:42 <angdraug> time
16:59:45 <aglarendil> mihgen: if we invest in our granular deployment this release cycle, this should become possible
16:59:45 <kozhukalov> looks like it is good time to end the meeting )
17:00:02 <kozhukalov> thanx everyone for attending
17:00:02 <mihgen> aglarendil: we will
17:00:07 <kozhukalov> great meeting
17:00:09 <kozhukalov> ending
17:00:13 <mihgen> thx all
17:00:19 <kozhukalov> #endmeeting