15:00:03 <bswartz> #startmeeting manila
15:00:04 <openstack> Meeting started Thu Sep 17 15:00:03 2015 UTC and is due to finish in 60 minutes.  The chair is bswartz. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00:06 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00:09 <openstack> The meeting name has been set to 'manila'
15:00:14 <bswartz> hello all
15:00:18 <zhongjun2> hi
15:00:23 <lpetrut> hi
15:00:30 <xyang1> Hi
15:00:31 <kaisers_> Hi
15:00:32 <ganso> hi
15:00:37 <markstur_> hi
15:00:47 <vponomaryov> hello
15:00:48 <mmartin78> hi
15:01:02 <rraja> hi
15:01:14 <bswartz> #agenda https://wiki.openstack.org/wiki/Manila/Meetings
15:01:24 <csaba> hi
15:01:24 <dustins> \o
15:01:25 <bswartz> okay 3 topics for today
15:01:46 <bswartz> #topic 3rd-party CI status
15:02:16 <bswartz> first of all, I'm sorry that 2 drivers were removed for not having a reporting CI system -- HDS SOP and IBM GPFS
15:02:43 <jasonsb> i share that sentiment
15:02:44 <bswartz> HDS has said they're not working on a CI system, but IBM is, so we're hopeful that we can add the GPFS driver back in
15:03:14 <cFouts> hi
15:03:42 <bswartz> the deadline next week is that CI systems should be passing unless there is a real bug in the patch
15:03:45 <tbarron> hi
15:04:18 <bswartz> that deadline is 24 Sept, according to the mail thread from the beginning of Liberty
15:04:30 <bswartz> #link http://ec2-54-67-102-119.us-west-1.compute.amazonaws.com:5000/?project=openstack/manila&user=&timeframe=72&start=&end=&page_size=500
15:04:34 <bswartz> #link http://ci-watch.tintri.com/project?project=manila
15:04:54 <bswartz> when I look here I see some systems *consistently* failing
15:05:22 <bswartz> both glusterfs drivers have all failures
15:05:33 <kaisers_> Yep, ours. We have issues with switching to the tempest-plugin-based tests
15:05:38 <bswartz> Quobyte is all failures
15:05:52 <vponomaryov> kaisers_: do you need help?
15:05:57 <kaisers_> Yep
15:06:01 <bswartz> A few others look unhealthy but at least I see some successes
15:06:45 <bswartz> the other thing I see is that some systems are not always reporting a result
15:06:57 <kaisers_> Today and tomorrow I'm mobile, but starting Sunday I'll be fully focused on Manila CI.
15:07:30 <bswartz> I was planning to post some patches in the coming week to validate that CI systems vote green on good patches and red on bad patches, and I'm not sure I'll be able to do that if CI systems are skipping whole patchsets
15:08:10 <bswartz> markstur_: why is "HP Cinder CI" voting on manila patches?
15:08:21 <bswartz> was that just a glitch?
15:08:24 <markstur_> some rename happened.  It is fixed now.
15:08:28 <bswartz> okay
15:09:13 <bswartz> well I hope that by next week the ones that are unstable are made more stable
15:09:24 <bswartz> I'll email individual maintainers if I have concerns
15:09:36 <kaisers_> Does this skipping issue include systems waiting for Jenkins +1? Or has that been ok'ed?
15:09:40 <bswartz> but the important thing is that quobyte and glusterfs fix whatever is making them fail consistently
15:09:58 <bswartz> kaisers_: that explains some but not all of the missing reports
15:10:14 <bswartz> I'll make sure my test patches do get a +1 from jenkins
15:10:29 <kaisers_> bswartz: ok, thanks
15:10:51 <lpetrut> We had some issues with zuul on the Windows CI yesterday; it stopped triggering jobs. That was fixed today
15:10:52 <kaisers_> And understood regarding the deadline
15:11:01 <bswartz> okay
15:11:09 <bswartz> we understand that CI systems are living systems that break from time to time
15:11:23 <rraja> bswartz: we need to fix our (glusterfs) devstack-plugin. and we've been busy fixing critical bugs in our drivers. the glusterfs-CIs should be up before the deadline.
15:11:36 <bswartz> your job is to try to establish a good record of voting correctly, and we forgive the occasional glitches
15:11:59 <bswartz> we don't forgive consistent failures
15:12:05 <bswartz> so please fix those
15:12:15 <bswartz> any questions about 3rd party CI?
15:12:32 <vponomaryov> bswartz: yes
15:12:33 <markstur_> maybe mention the recheck-  name
15:12:38 <bswartz> oh yes
15:13:03 <bswartz> huawei had accidentally used the string "recheck huawei" to trigger their system, but that string also triggers jenkins
15:13:14 <vponomaryov> bswartz: you said that you will also make changes to verify that CIs fail when expected, but jenkins can fail too and Third-Party CIs will not even start running in that case
15:13:52 <bswartz> we prefer a string like "recheck-huawei" (with the dash it won't trigger jenkins) or "run huawei-ci" or any other string that doesn't trigger jenkins
15:14:03 <markstur_> recheck-x also triggers
15:14:11 <kaisers_> I switched the Quobyte CI today. Our other CIs will pick up the recheck -> run switch with their next restart.
15:14:31 <bswartz> vponomaryov: for negative tests I'll create bugs in the drivers themselves that won't cause jenkins to fail
15:14:33 <zhongjun2> Yep, we will pay attention to that.
15:14:35 <markstur_> We're switching HP also. The "recheck-" form was recommended, but it's a bad idea. Use "run X".
15:14:48 <bswartz> markstur_: okay
15:14:52 <zhongjun2> "run huawei-ci"
15:15:09 <bswartz> well just make sure your trigger doesn't also trigger other systems, otherwise it wastes resources
15:15:40 <bswartz> "run <vendor>-ci" is known to be safe and netapp uses it
15:15:58 <bswartz> any other questions about CI before we move on?
15:16:04 <vponomaryov> bswartz: is it safe to not run all third-party CIs?
15:16:14 <bswartz> vponomaryov: ?
15:16:19 <vponomaryov> "run <vendor>-ci"
15:16:38 <bswartz> the idea is that you can trigger each system individually
15:16:55 <bswartz> "run netapp-ci" will rerun netapp, "run huawei-ci" will rerun huawei
15:16:58 <vponomaryov> bswartz: the word "run" will not trigger anything by itself, right?
15:17:05 <bswartz> no it should not
15:17:08 <vponomaryov> ok
15:17:24 <bswartz> as I said, the trigger should be unique to your system so nobody accidentally launches your CI
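(Illustration: a minimal Python sketch of the trigger-string collision discussed above. Both regexes are hypothetical approximations for demonstration, not the actual Zuul/Jenkins configuration.)

    import re

    # Hypothetical stand-in for the upstream recheck trigger; the real
    # Zuul regex differs -- this only demonstrates the collision.
    jenkins_trigger = re.compile(r'^\s*recheck\b', re.IGNORECASE)
    # A vendor trigger of the recommended "run <vendor>-ci" form.
    huawei_trigger = re.compile(r'^\s*run huawei-ci\s*$', re.IGNORECASE)

    for comment in ('recheck huawei', 'recheck-huawei', 'run huawei-ci'):
        print(comment,
              '| jenkins:', bool(jenkins_trigger.match(comment)),
              '| huawei:', bool(huawei_trigger.match(comment)))
    # "recheck huawei" and "recheck-huawei" both hit the recheck pattern
    # (a \b word boundary also matches before a dash, which is why the
    # dash alone may not help, per markstur_); only "run huawei-ci" is
    # unique to the vendor system.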
15:17:57 <bswartz> #topic RC1 status
15:18:13 <bswartz> #link https://launchpad.net/manila/+milestone/liberty-rc1
15:18:27 <bswartz> we've got 6 bugs left
15:18:55 <bswartz> at this point I'm going to start retargeting bugs out of liberty if there's no fix in review
15:19:27 <bswartz> I expect to make the RC early next week
15:19:32 <bswartz> ideally on Monday
15:19:59 <bswartz> also, we'll be releasing python-manilaclient and manila-ui for liberty too
15:20:25 <jasonsb> should manilaclient master work against kilo manila?
15:20:31 <jasonsb> (it gave me trouble)
15:20:47 <bswartz> jasonsb: it should, but I think you have to pass some extra parameters
15:21:03 <jasonsb> ok, i'll ask on regular irc
15:21:13 <vponomaryov> jasonsb: latest manilaclient uses new endpoint
15:21:19 <vponomaryov> jasonsb: so, need to switch to old one
15:21:20 <jasonsb> nod
15:21:21 <ganso> bswartz: can a bug still be opened today, and be fixed by tomorrow 23:59 UTC?
15:21:21 <bswartz> right now it's safest to use a version of the client from kilo to talk to a kilo server
15:21:28 <bswartz> ganso: !!!
15:21:46 <bswartz> ganso: only if it's critical
15:22:31 <bswartz> so a reminder to cores -- you should only be workflowing code changes linked to bugs targeted at liberty-rc1 now (workflowing docs changes is also okay)
15:22:51 <bswartz> every change must have a bug and the bug must be targeted
15:23:05 <bswartz> otherwise it waits until mitaka
15:23:10 <toabctl> ack
15:23:24 <vponomaryov> bswartz: manila-ui has no RC1 milestone
15:23:49 <vponomaryov> same for manilaclient
15:23:49 <bswartz> vponomaryov: yeah in launchpad we're missing milestones -- just look at the series
15:24:04 <bswartz> technically libraries don't have RCs
15:24:11 <bswartz> they just have releases and stable branches
15:24:19 <vponomaryov> then it applies only to the manila project?
15:24:29 <bswartz> there will be releases tomorrow and stable branches branched from those releases
15:24:41 <bswartz> manila's stable branch will be branched at the RC
15:25:28 <bswartz> vponomaryov: correct, but for the library projects we still need to aim for stability
15:25:57 <bswartz> we won't be backporting features from mitaka to liberty under any circumstances
15:26:11 <bswartz> we can backport bugfixes if they are serious
15:27:04 <bswartz> any questions about RC1 and library releases?
15:27:45 <bswartz> #topic Support for infinite shares in Manila
15:28:02 <bswartz> mmartin78: you're up
15:28:10 <bswartz> also welcome mmartin78
15:28:53 * bswartz wonders if mmartin78 is still here.....
15:29:19 <vponomaryov> bswartz: he was asking about it yesterday in the manila channel
15:29:30 <bswartz> This topic is new to me, it sounds like a suggestion for a new Mitaka feature
15:29:34 <vponomaryov> bswartz: about the feature "share without size"
15:29:53 <vponomaryov> bswartz: that it will be limited only by quota
15:30:02 <mmartin78> yes, here
15:30:11 <bswartz> it sounds like an interesting idea, but it would violate a number of our existing designs
15:30:41 <mmartin78> this is actually supported by commercial offerings like Amazon EFS
15:30:51 <bswartz> vponomaryov: if it's limited by quota then it's not really infinite is it?
15:31:00 <kaisers_> Interesting for us, too
15:31:08 <vponomaryov> bswartz: quota can be infinite
15:31:12 <mmartin78> well, there can optionally be a quota
15:31:13 <vponomaryov> bswartz: just set -1
15:31:20 <mmartin78> right
15:31:27 <bswartz> so how do you prevent abuse in that case?
15:32:00 <bswartz> dd if=/dev/urandom of=/manila/share/huge_file
15:32:15 <kaisers_> If there's a risk of abuse, use a quota. Don't need that otherwise...
15:32:34 <mmartin78> right
15:32:50 <kaisers_> Quobyte works like that.
15:33:16 <mmartin78> so what are manila's limitations in supporting this? I believe the first is that share size is a required parameter in the create-share API
15:33:24 <bswartz> Amazon relies on their billing model to prevent abuse
15:33:41 <kaisers_> :-)
15:34:02 <mmartin78> that is also a valid method of preventing abuse
15:34:03 <bswartz> manila doesn't have a unified model for billing -- deployers have to use their own methods to charge users for consumption
15:34:21 <bswartz> thus we have quotas and share sizes that count against those quotas
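(Illustration: a hedged sketch of the "-1 means unlimited" quota convention vponomaryov mentions above; the function and names are illustrative, not manila's actual quota code.)

    # Hypothetical quota check treating -1 as "no cap"; not real manila code.
    def exceeds_quota(used_gb, requested_gb, limit_gb):
        if limit_gb == -1:  # deployer chose not to cap this project
            return False
        return used_gb + requested_gb > limit_gb

    assert not exceeds_quota(500, 100, limit_gb=-1)  # unlimited: allowed
    assert exceeds_quota(500, 100, limit_gb=512)     # bounded: denied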
15:34:48 <bswartz> I'm open to the concept of an infinite share -- we just need to figure out how big the change would be
15:34:52 <mmartin78> should manila even be concerned with preventing abuse in this case, though?
15:35:06 <bswartz> we'd need to modify the database, the quota system, the APIs, the driver, and maybe more
15:35:31 <mmartin78> seems like the scheduler as well, right?
15:35:40 <vponomaryov> right
15:35:43 <bswartz> mmartin78: it's our job to give deployers tools to use -- deployers obviously want to prevent abuse of their systems, although different deployers may choose different methods
15:35:51 <vponomaryov> capacity filter affected
15:36:15 <vponomaryov> mmartin78: are you aware of the "extend" feature?
15:36:34 <mmartin78> they could use quota to prevent abuse as suggested
15:36:37 <bswartz> this is just something that needs a more concrete proposal
15:36:39 <mmartin78> or billing model
15:36:52 <mmartin78> yes I am aware
15:36:59 <bswartz> I don't hear anyone saying this is a bad idea, it's just a significant change
15:37:14 <vponomaryov> mmartin78: does it not satisfy the use cases that the "infinite" one does?
15:37:15 <bswartz> mmartin78: are you interested in designing and implementing this feature for Mitaka?
15:37:33 <xyang1> We just got rid of 'infinite' for capacity reporting
15:37:43 <xyang1> Looks like this may add that back
15:37:50 <bswartz> xyang1: yeah I think this is different
15:38:00 <vponomaryov> xyang1: not really, in this case we say "not limited"
15:38:03 <mmartin78> we might be interested if there is support for it
15:38:11 <vponomaryov> xyang1: and skip capacity filtering
15:38:11 <bswartz> obviously no system can store infinite data, but I think the feature is "unbounded" share size
15:38:17 <mmartin78> just wanted to bring it up first
15:38:37 <bswartz> but you can still run into physical system limitations even with unbounded share size
15:38:45 <mmartin78> right unbounded
15:39:00 <vponomaryov> bswartz: we can fit this into our logic if we support the value "-1" for share size
15:39:18 <kaisers_> Gotta run, thanks & bye
15:39:18 <bswartz> vponomaryov: that will require a lot of special case code
15:39:36 <vponomaryov> bswartz: it will keep the API consistent
15:39:37 <xyang1> This will still need to go thru scheduler to find the right backend though
15:40:00 <vponomaryov> the scheduler is an under-the-hood thing, it is ok to update it
15:40:24 <xyang1> Not saying we cannot update
15:40:24 <bswartz> so if "unbounded" shares are going to happen, someone needs to design how to implement them (actually figure out what parts of the code will need changing) and make the case for what problems they solve that can't be solved with our existing quotas and the ability to resize shares
15:40:32 <mmartin78> would it be a problem to make share size a non required parameter?
15:41:09 <vponomaryov> mmartin78: a bigger change than allowing the value "-1"
15:41:17 <bswartz> then we can add it to the list for mitaka features
15:41:30 * bswartz dislikes negative sizes in the rest api
15:41:42 <mmartin78> bswartz: we could actually take that on
15:41:43 <bswartz> better to change the rest api than allow nonsense values
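(Illustration: a sketch of the API-side alternative being debated -- making "size" optional instead of accepting -1. All names here are hypothetical, not manila code.)

    UNBOUNDED = None

    def validate_share_size(body):
        # Treat an omitted "size" as an unbounded share rather than
        # overloading the sentinel value -1.
        size = body.get('size', UNBOUNDED)
        if size is UNBOUNDED:
            return UNBOUNDED  # scheduler would also skip capacity checks
        if not isinstance(size, int) or size < 1:
            raise ValueError('size must be a positive integer (GiB)')
        return size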
15:42:04 <bswartz> microversions allow us to evolve the rest api gracefully going forward
15:42:27 <mmartin78> even if it is more change I would change the API as well
15:42:35 <bswartz> side note: there's still a bunch of work to be done on the client side to make microversions easy to use
15:42:39 <mmartin78> not allow nonsense values
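(Illustration: a minimal sketch of pinning a microversion from a client. The header name is the one manila's v2 API uses; the endpoint, token, tenant id, and version value are placeholders.)

    import requests

    # Manila's API service conventionally listens on port 8786; the
    # endpoint and token below are placeholders.
    resp = requests.get(
        'http://controller:8786/v2/<tenant_id>/shares',
        headers={
            'X-Auth-Token': '<token>',
            # Request a specific API microversion instead of the default.
            'X-OpenStack-Manila-API-Version': '2.6',
        })
    print(resp.status_code)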
15:43:18 <bswartz> anyways my point above was that this feature needs an owner to drive it
15:43:28 <mmartin78> so it seems like the community would be open to this feature
15:43:36 <bswartz> ideally someone who will be in Tokyo and can present the design
15:43:41 <mmartin78> we are open to taking ownership
15:44:05 <vponomaryov> mmartin78: please clarify, who is "we"?
15:44:08 <bswartz> mmartin78: yes I think we're open to it, as long as there's a good use case and the code change doesn't introduce a bunch of risk
15:44:16 <mmartin78> still not sure if we will be able to tackle this in Mitaka or beyond
15:45:19 <bswartz> mmartin78: make sure to write up a design first and we can review that before you get too deep into implementation
15:45:34 <mmartin78> bswartz: ok fair enough, so we will try to put together something for Tokyo
15:45:34 <bswartz> just a wiki page is probably fine
15:45:38 <bswartz> awesome
15:45:44 <bswartz> speaking of tokyo...
15:45:49 <bswartz> #topic Tokyo
15:46:09 <bswartz> Manila has 2 fishbowl sessions, 4 working sessions, and 1 half-day meetup in tokyo
15:46:15 <mmartin78> otherwise it might be for after Tokyo; we're still sizing our resources
15:46:30 <bswartz> in the next few weeks we're going to decide how to use those sessions, so I'll be looking for proposals
15:46:56 <bswartz> #topic open discussion
15:47:03 <bswartz> okay anything else from anybody?
15:47:15 <bswartz> mmartin78: I think vponomaryov was asking who you work for
15:47:27 <mmartin78> IBM SoftLayer
15:47:41 <vponomaryov> thanks
15:47:49 <bswartz> okay welcome again
15:48:02 <mmartin78> thanks!
15:48:05 <bswartz> it sounds like there are no other topics for today
15:48:37 <bswartz> thanks everyone, let's get the last few bugs fixed and get your CI system working if it's not already!
15:48:49 <bswartz> #endmeeting