16:00:10 <smcginnis> #startmeeting Cinder
16:00:11 <openstack> Meeting started Wed Feb  1 16:00:10 2017 UTC and is due to finish in 60 minutes.  The chair is smcginnis. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:12 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:14 <openstack> The meeting name has been set to 'cinder'
16:00:19 <eharney> hi
16:00:23 <pots> o/
16:00:24 <dulek> o/
16:00:33 <smcginnis> ping dulek duncant eharney geguileo winston-d e0ne jungleboyj jgriffith thingee smcginnis hemna xyang1 tbarron scottda erlon rhedlind jbernard _alastor_ bluex karthikp_ patrickeast dongwenjuan JaniceLee cFouts Thelo vivekd adrianofr mtanino yuriy_n17 karlamrhein diablo_rojo jay.xu jgregor baumann rajinir wilson-l reduxio wanghao thrawn01 chris_morrell stevemar watanabe.isao,tommylikehu
16:00:39 <smcginnis> mdovgal ildikov wxy viks ketonne
16:00:40 <erlon> hey
16:00:45 <hemna> yough
16:00:45 <mdovgal> Hi!
16:00:46 <xyang1> hi
16:00:47 <scottda> hi
16:00:48 * bswartz wanders into the room
16:01:02 <patrickeast> Hey
16:01:20 <jungleboyj> o/
16:01:37 <smcginnis> #topic Announcements
16:02:01 <breitz> hi
16:02:05 <smcginnis> RC-1 is tomorrow. We should try to wrap up any important bugs today if at all possible.
16:02:25 <smcginnis> After RC-1 we will really need to limit what, if anything, we allow in.
16:02:35 <smcginnis> Critical only, ideally.
16:02:36 <tbarron> hi
16:03:16 <smcginnis> I need to start reworking the focus etherpad to start prepping for Pike, so I won't link that here for now.
16:03:22 <dulek> smcginnis: master opens for Pike once RC-1 is tagged, right?
16:03:54 <smcginnis> dulek: Correct. I need to branch stable/ocata when we tag RC-1. So once that is done, that means master will now be Pike.
16:04:24 <smcginnis> So just hold off on anything until you see that stable/ocata is actually created. ;)
16:04:42 <Swanson> Hi
16:04:45 <smcginnis> #link http://www.openstack.org/ptg PTG info and registration
16:04:58 <e0ne> hi
16:05:03 <smcginnis> I think it was mentioned there are only 18 spots left for the PTG as of this morning.
16:05:26 <smcginnis> If you've been waiting on that, better get moving.
16:05:47 <smcginnis> #link https://etherpad.openstack.org/p/ATL-cinder-ptg-planning PTG topic planning
16:05:58 * dulek finally got his flight tickets for PTG. Wheee! :)
16:06:00 <smcginnis> Add any ideas to scottda's list of topics. ^^
16:06:04 <smcginnis> dulek: Awesome!
16:06:05 <scottda> ha
16:06:09 <smcginnis> :)
16:06:41 <smcginnis> I'll try to start arranging proposed topics into a logical-ish order soon so we can have a list to work through at the PTG.
16:07:17 <smcginnis> #link https://www.openstack.org/summit/boston-2017/call-for-presentations/ Summit CFP
16:07:35 <smcginnis> Less than a week left to submit talk proposals for the Summit.
16:09:02 <smcginnis> I guess one final announcement item - PTL nomination period closed and I was the only one. So y'all stuck with me again. :)
16:09:24 <scottda> Congratulations, Great Leader!
16:09:33 <smcginnis> Hah, thanks.
16:09:41 <smcginnis> #topic Consistent Versioned Endpoints in the Service Catalog
16:09:49 <smcginnis> scottda: OK, all yours.
16:10:08 <scottda> So, we'll be talking about this Mon or Tues at the PTG
16:10:21 <scottda> Ops have complained about the service catalog being inconsistent...
16:10:28 <scottda> and cinder is a good example.
16:10:43 <scottda> I.e. that we have 'volume', 'volumev2', 'volumev3'
16:10:50 <scottda> #link http://lists.openstack.org/pipermail/openstack-dev/2017-January/110043.html
16:10:50 <jungleboyj> smcginnis, Thanks for continuing to be our fearless leader!
16:11:18 <scottda> I'm not sure anything can be done to change things. That might break a lot of code and scripts.
16:11:39 <scottda> But we'll be talking about it. So either attend, or let me know opinions so I can represent them.
16:11:54 <smcginnis> scottda: Would be great if we could just have 'volume' and figure out from there where to go.
16:12:02 <scottda> smcginnis: +10000
16:12:08 <ildikov> smcginnis: +1
16:12:10 <scottda> Yes, volume could be the naked endpoint:
16:12:19 <scottda> http://<url>:<port>
16:12:26 <scottda> and you get version info from there.
16:12:34 <e0ne> do we have working version discovery in cinderclient?
16:12:37 <scottda> And I think that ops and API WG folks like that idea.
16:12:37 <erlon> smcginnis: +1
16:12:50 <scottda> e0ne: yes
16:13:01 <e0ne> scottda: cool
16:13:04 <scottda> both the CLI and a static method that takes a URL and no auth
16:13:24 <scottda> #link https://review.openstack.org/#/c/420119
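A minimal sketch of the no-auth discovery scottda describes: a GET on the bare, unversioned endpoint returns a document listing the supported API versions, so a client can pick one without needing catalog entries like 'volumev2'/'volumev3'. The URL and port below are illustrative assumptions, and this uses plain requests rather than the static cinderclient method in the linked patch.

    import requests

    def get_supported_versions(endpoint="http://cinder.example.com:8776/"):
        """List the API versions advertised by an unversioned endpoint."""
        # Unversioned endpoints conventionally answer 200 or 300
        # (Multiple Choices) with a {"versions": [...]} body.
        resp = requests.get(endpoint, timeout=5)
        return resp.json().get("versions", [])

    if __name__ == "__main__":
        for v in get_supported_versions():
            print(v.get("id"), v.get("status"), v.get("min_version"))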
16:13:46 <scottda> But, I'm sure there will be debate about how to change things, deprecation, etc.
16:13:58 <scottda> Please attend, or voice strong opinions somehow.
16:14:21 <scottda> Otherwise, they'll get my opinion.
16:14:27 <scottda> That's it for this topic.
16:14:53 <smcginnis> scottda: OK, thanks!
16:15:06 <smcginnis> I think if we can work through migration issues, it would get us in a better place.
16:15:11 <bswartz> scottda: I'll be there and full of opinions
16:15:25 <scottda> bswartz: Good to hear. Maybe after the meeting we can chat.
16:15:47 <smcginnis> I'd love to revisit removing of v1 at some point too. But that's another big discussion for another time.
16:16:03 <smcginnis> #topic Storing cinder version info and client caching in nova
16:16:06 <scottda> smcginnis: Yes, but we'll likley touch on that..
16:16:12 <scottda> OK, next topic...
16:16:18 <scottda> #link https://review.openstack.org/#/c/420201/
16:16:42 <scottda> I've a POC for how Nova can get cinder server (and client) info, and use it for the new attach/detach APIs
16:17:10 <scottda> One question is: Do we store that info, or get it dynamically for each call to the cinderclient?
16:17:32 <scottda> Another question is: Do we really need to instantiate a client for each call from nova -> cinder?
16:17:40 <scottda> We currently do this 4 times for a volume attach.
16:17:43 <hemna> scottda, where would the client even store it ?
16:17:57 <smcginnis> scottda: Seems inefficient.
16:18:00 <scottda> hemna: See my patch. It's clunky, but uses a Global
16:18:14 <scottda> stored in nova/cinder.py, not on the client
16:18:17 <hemna> between invocations?
16:18:22 <scottda> nova gets the version info, and puts it into a global
16:18:31 <hemna> oh.  ew ok.
16:18:32 <scottda> Once that is set, Nova can use that in manager.py code
16:18:34 <jungleboyj> scottda, Wow, 4 times seems excessive.
16:18:50 <smcginnis> Issue there would be: what if the service gets upgraded? Nova would have to get restarted to recognize that Cinder changed.
16:18:59 <scottda> Other choice is to query the cinder server for each nova call that goes through cinderclient to cinder...
16:19:07 <bswartz> instantiating an object should be trivially cheap though -- what is the expensive part?
16:19:13 <scottda> smcginnis: Yes, that's the issue. But how big of an issue is it?
16:19:31 <scottda> bswartz: Expensive would be pinging cinder server each call to get version info
16:19:38 <bswartz> why would we do that?
16:19:59 <bswartz> just send the API request with the highest version you know...
16:20:05 <scottda> bswartz: https://review.openstack.org/#/c/420201/7/nova/compute/manager.py
16:20:16 <dulek> If the Cinder microversion was reported in the service catalog, it would be better - we're calling Keystone anyway.
16:20:20 <scottda> because we've new APIs for nova to use with cinder for volume attach...
16:20:25 <bswartz> this is the whole point I was trying to make about version APIs
16:20:31 <scottda> But nova code won't know at run time which version of cinder exists.
16:20:42 <bswartz> you never query the server for versions until after something has failed
16:20:58 <bswartz> you send the request, expect success, and deal with failure by negotiating down
16:21:07 <scottda> bswartz: maybe
16:21:13 <scottda> not sure Nova wants to do it that way
16:21:22 <bswartz> as an optimization, you cache the server version after negotiating down to avoid repeat failures
16:21:23 <dulek> Maybe it should be in nova.conf? It's admin who knows what Cinder version is in the deployment.
16:21:31 <e0ne> bswartz: +1 on such solution
16:21:54 <bswartz> but caching the version is dangerous because the server could upgrade without telling you
16:21:55 <smcginnis> Kind of more "pythonic" to try and fail, then fall back to the old way I guess.
16:22:04 <scottda> bswartz: if you're going to cache the server version, why not just get it upon first query and use it?
16:22:14 <bswartz> scottda: because it's slow as hell
16:22:16 <scottda> I'm not personally married to any particular solution...
16:22:34 <bswartz> and the negotiate down case should be rare
16:22:39 <scottda> bswartz: Not sure it's any more round trips than the try-fail-try-again
16:22:46 <bswartz> you always want the common case to be fast and the rare case to be slow
16:23:10 <bswartz> it's the same in the fail case but more round trips in the happy case
16:23:24 <scottda> bswartz: Yeah, it's maybe the best. Part of this conversation is to have another conversation about the same subject with Nova
16:23:33 <bswartz> and in this specific case you're worried about it happening 4 times, which is easily avoidable
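A sketch of the negotiate-down scheme bswartz outlines: send every request at the highest microversion the client knows, and only on a version failure query the server, retry, and cache the result so the rare case stays rare. All names here (the api object, VersionConflict) are hypothetical, not Nova's actual code.

    # Cache is only populated after a downgrade, so the happy path costs
    # no extra round trips; the 4 calls per attach can then reuse it.
    _cached_version = None
    MAX_KNOWN = "3.27"  # highest microversion this client understands

    class VersionConflict(Exception):
        """Server rejected the requested microversion."""

    def call_cinder(api, method, *args):
        global _cached_version
        version = _cached_version or MAX_KNOWN
        try:
            # Optimistic path: assume the server speaks our newest version.
            return api.request(method, *args, microversion=version)
        except VersionConflict:
            # Rare path: negotiate down, then cache to avoid repeat failures.
            _cached_version = api.get_max_microversion()
            return api.request(method, *args, microversion=_cached_version)

As smcginnis notes below, the cache can go stale if the server is upgraded underneath Nova; one mitigation is to expire the cached version periodically and retry at MAX_KNOWN.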
16:23:37 <e0ne> FYI,  cinder's patch "Make Nova API version configurable" https://review.openstack.org/#/c/302177/
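For comparison, a sketch of dulek's nova.conf idea, mirroring the linked cinder patch in the opposite direction; the option name, group, and default are assumptions, not Nova's real config schema.

    from oslo_config import cfg

    cinder_opts = [
        cfg.StrOpt('cinder_api_version',
                   default='3.0',
                   help='Cinder API microversion to request. Set by the '
                        'operator, who knows what the deployment runs.'),
    ]

    CONF = cfg.CONF
    CONF.register_opts(cinder_opts, group='cinder')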
16:23:38 <scottda> Since they will ultimately decide what goes in their code.
16:24:42 <scottda> OK, well, I'm wanting to solicit ideas.. We'll likely discuss at the next nova-cinder api meeting...
16:25:02 <bswartz> I'm worried that microversions are getting a bad name because people are deciding to use them wrong
16:25:04 <scottda> tomorrow, Thurs, at 1700 UTC
16:25:28 <scottda> bswartz: Well, I think they have a bad name. And it's a matter of education and information.
16:25:34 <smcginnis> +1
16:25:40 <bswartz> indeed
16:25:47 <scottda> If you just try..except for everything, why do you need microversions?
16:25:54 <bswartz> I can only educate and inform so much
16:26:04 <scottda> just see if the feature exists, and then fall back to the older way.
16:26:17 <smcginnis> I think in this case though, it's not limited to just one call and failing back. It kind of dictates the whole workflow being done.
16:26:36 <scottda> smcginnis: That's right. There will be a lot of code in Nova for just attach and detach...
16:26:41 <smcginnis> So there probably should be some caching or something to know for the whole "transaction".
16:26:56 <scottda> And we might be adding more in the future for special cases like migration, shelve offload, etc.
16:27:40 <scottda> OK, well, I want to solicit opinions, and welcome people to join in.
16:27:52 <scottda> ildikov: Did I get the meeting time right for nova-cinder api talk?
16:28:10 <smcginnis> ildikov: Thursdays, same time, right?
16:28:17 <bswartz> what channel is that meeting
16:28:38 <ildikov> scottda: smcginnis: yes, Thursdays, 1700UTC, #openstack-meeting-cp
16:28:45 <smcginnis> ildikov: Thanks!
16:28:57 <scottda> cool. That's it for me.
16:29:01 <smcginnis> scottda: Thanks again.
16:29:06 <ildikov> scottda: smcginnis: it's also registered now on eavesdrop, so it's official :)
16:29:12 <scottda> I'll shut up now
16:29:14 <jungleboyj> Oooh!
16:29:15 <smcginnis> ildikov: Ooh, nice.
16:29:17 <smcginnis> scottda: Hah
16:29:25 <smcginnis> #topic Open Discussion
16:29:34 <smcginnis> That was it on the agenda. Anything else today?
16:29:35 <jungleboyj> ildikov, Nice. Now the issues will get resolved then, right?  ;-)
16:29:37 <ildikov> scottda: smcginnis: :)
16:29:48 <ildikov> jungleboyj: what issues? ;)
16:29:52 <smcginnis> Or we can get 30 minutes to finalize bug fixes before the RC. :)
16:30:01 <hemna> smcginnis, I wanted to raise the question of marking drivers unsupported.
16:30:04 <jungleboyj> ildikov, Indeed.  ;-)
16:30:16 <smcginnis> hemna: Oh, good topic I guess. Especially given the timing.
16:30:26 <hemna> I'm re-running the latest report right now
16:30:37 <jungleboyj> And here goes the next 30 minutes.
16:31:00 <hemna> I've spent some time reworking the lastcomment.py tool to output some more information that gives us a better idea of the jobs for each CI
16:31:24 <smcginnis> Last one hemna ran until he gets current results: http://paste.openstack.org/show/597113/
16:31:41 <hemna> http://paste.openstack.org/show/597227/
16:31:44 <hemna> that one just finished
16:32:10 <hemna> it details every job in the CI and if the success rate is <=60% it shows the last success run for the job as well
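A rough sketch of the per-job aggregation hemna describes; the input format (one record per CI comment with job name, pass/fail, and timestamp) is an assumption about what the reworked lastcomment.py collects from Gerrit.

    from collections import defaultdict

    def summarize(results, threshold=0.60):
        """results: iterable of (ci_name, job_name, passed, timestamp)."""
        stats = defaultdict(
            lambda: {"runs": 0, "passes": 0, "last_success": None})
        for ci, job, passed, ts in results:
            s = stats[(ci, job)]
            s["runs"] += 1
            if passed:
                s["passes"] += 1
                if s["last_success"] is None or ts > s["last_success"]:
                    s["last_success"] = ts
        for (ci, job), s in sorted(stats.items()):
            rate = s["passes"] / s["runs"]
            line = "%s / %s: %.0f%% over %d runs" % (
                ci, job, rate * 100, s["runs"])
            if rate <= threshold:
                # Mirror the report: flag low-rate jobs with last success.
                line += " (last success: %s)" % s["last_success"]
            print(line)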
16:32:31 <hemna> do we care at this point to mark failing drivers as unsupported?
16:32:42 <hemna> some are low%
16:32:59 <smcginnis> hemna: I think with timing I would rather mark them unsupported as soon as Pike opens up.
16:33:01 <hemna> 36%
16:33:16 <hemna> some are 0%
16:33:27 <smcginnis> Though it can be argued we are going out with drivers supported that do not meet CI requirements at this point. :{
16:33:39 <hemna> yah, that's why I wanted to raise that now
16:33:48 <hemna> clearly some CI's are not working
16:34:31 <hemna> *crickets*
16:34:35 <xyang2> hemna: whatever we decide, I think we should send out an email to the mailing list, describe what exactly is the criteria for marking a driver unsupported
16:34:44 <smcginnis> I'm not as concerned about passing percentage (at least at this point) as the ones that haven't even reported for weeks.
16:34:50 <hemna> xyang2 we've done that every time we've met
16:35:05 <xyang2> hemna: give driver maintainers the tools you used so they can run themselves
16:35:11 <smcginnis> xyang2: We have and we've published it on our wiki. I think we said two weeks?
16:35:18 <hemna> I'm kinda of the mindset that we have been the nice guy for a while now to get people used to keeping the CI up
16:35:28 <xyang2> smcginnis: right , two weeks
16:35:37 <jungleboyj> smcginnis, I think we should talk about what the acceptable percentage is at the PTG and clearly state that before doing anything right now.
16:35:41 <hemna> we either care about the CI failing and use the unsupported flag or...not
16:35:47 <xyang2> smcginnis, hemna: I don't remember we published something based on percentage
16:36:08 <smcginnis> jungleboyj: Yes, before we base anything on percentage we should discuss and publish it.
16:36:12 <hemna> we said 50% passing in a 2 week period at the last mid cycle
16:36:19 <jungleboyj> Obviously if it isn't running that means it is unsupported.
16:36:36 <jungleboyj> hemna, Did we announce that?
16:36:43 <xyang2> hemna: put that on wiki then
16:36:48 <hemna> yup, I'm sure smcginnis did
16:36:52 <xyang2> hemna: it was not there when I checked last time
16:37:09 <hemna> anyway, we have many CI's far less than 50% now
16:37:31 <xyang2> hemna, smcginnis: I don't remember we agreed on an exact percentage before
16:37:48 <scottda> Seems we need to talk about a way to capture agreements from meetups...that's a topic for the PTG.
16:37:51 <smcginnis> https://wiki.openstack.org/wiki/Cinder/tested-3rdParty-drivers#Non-Compliance_Policy
16:38:02 <smcginnis> Not percentage. Just time based.
16:38:36 <jungleboyj> Doh.
16:38:53 <hemna> ok it's not published, but I do remember smcginnis saying 50% in 2 weeks at the mid cycle
16:38:59 <wiggin15> Goodhart's law: "When a measure becomes a target, it ceases to be a good measure."
16:39:03 <hemna> we have some that are 4%, 14%
16:39:12 <smcginnis> We do have the line "Other issues are found but failed to be addressed in a timely manner"
16:39:18 <hemna> I don't consider those as working
16:39:31 <smcginnis> I think 4% passing is an "other issue".
16:39:51 <smcginnis> The problem now is, if we mark them unsupported today, that gives zero time for issues to be addressed.
16:40:10 <smcginnis> I don't really want to cause issues. I just want a strong incentive to keep CI running.
16:40:29 <hemna> that's what I thought the unsupported flag was
16:40:34 <hemna> instead of removing them from tree
16:40:47 <e0ne> hemna: +1
16:40:51 <hemna> here is a good example
16:40:52 <xyang2> too late for Ocata, already passed O-3, given that we didn't publish the percentage
16:40:55 <hemna> 11% success rate
16:41:00 <hemna> last success was 42 days ago
16:41:07 <smcginnis> Yeah, but it does have a negative impact on end users.
16:41:08 <hemna> and yet we are worried about upsetting them?
16:41:11 <hemna> I don't get it
16:41:21 <smcginnis> More concerned about the folks stuck using their gear.
16:42:06 <hemna> so
16:42:16 <smcginnis> Let's talk about this at the PTG. I think we need to definitely decide on a passing percentage and timeframes for things.
16:42:17 <hemna> can we backport a bug fix to remove the unsupported flag?
16:42:19 <Swanson> Better they know they aren't being supported now?
16:42:26 <bswartz> just because cinder team declares a driver unsupported doesn't mean that an end user can't obtain support from their distro for that driver
16:42:36 <bswartz> that's between them and the distro
16:42:38 <Swanson> bswartz +1
16:42:38 <smcginnis> bswartz: Right.
16:42:42 <hemna> bswartz, +1
16:42:58 <smcginnis> OK, what do folks think? Should we flag these as unsupported now?
16:43:12 <xyang2> I disagree
16:43:20 <scottda> Seems like there's not much warning.
16:43:26 <bswartz> I personally thing the time periods are too short
16:43:31 <patrickeast> Yea... Let's wait
16:43:34 <bswartz> s/thing/think/
16:43:34 <jungleboyj> Well, for future discussion I think we need to plan at what point we are going to do the last check for adding the unsupported flag.
16:43:35 <xyang2> If on the wiki, it clearly says there's a percentage, then that is different
16:43:37 <Swanson> Ocata probably isn't the release to do this with.
16:43:42 <Swanson> Short cycle and all.
16:43:44 <smcginnis> jungleboyj: +1
16:43:49 <bswartz> but if we agreed to them we should change the agreement or enforce it
16:43:51 <xyang2> this criteria is not clearly published
16:43:52 <jungleboyj> That can go with the percentage discussion.
16:43:54 <hemna> fwiw every time we discuss this everyone waffles on it.
16:44:09 <hemna> this was the entire point of creating the unsupported flag vs. removal.
16:44:19 <Swanson> hemna, Everyone becomes terrified and thinks back to the last time their CI broke for a month.
16:44:20 <hemna> now we will ship drivers that haven't had a working CI in 42+ days.
16:44:20 <bswartz> I also think percentages are the wrong measure
16:44:20 <jungleboyj> hemna, But we have successfully moved forward over time.
16:44:23 <smcginnis> hemna: We do have a few marked unsupported that I plan on removing in Pike. I'm just concerned about the timing right now.
16:44:36 <bswartz> but I agree with hemna, we have to enforce what we have, or decide to change the rules
16:44:40 <jungleboyj> smcginnis, ++ Timing ...
16:45:02 <xyang2> yes we should enforce the rule, but the rule did not specify a percentage
16:45:07 <smcginnis> And out of the handful that we marked unsupported, at least half came back and didn't realize things were failing and fixed it. But it took a few weeks for that to happen.
16:45:08 <scottda> But before hemna's latest scripts, did we have a good way of getting data on CIs running?
16:45:09 <xyang2> so it is not a written rule
16:45:11 <hemna> I'd argue that the timing is perfect for it because the CI's are broken
16:45:13 <e0ne> bswartz. hemna: +1
16:45:16 <scottda> That needs to be consistent and reliable
16:45:18 <hemna> and that's the point of showing that they are unsupported.
16:45:24 <bswartz> making rules and ignoring them is just bad
16:45:30 <Swanson> Of course if a CI is failing at the point we are releasing a product doesn't that generally mean it doesn't work?
16:45:36 <hemna> and that they haven't been around working on it, haven't been around telling us that the CI is broken because of X
16:45:39 <hemna> and not participating.
16:46:03 <xyang2> bswartz: there is a rule written in the wiki currently, but that rule does not say a percentage
16:46:10 <bswartz> xyang2: +1
16:46:12 <xyang2> I go by what it says on the wiki
16:46:21 <hemna> yah it doesn't say it
16:46:32 <hemna> but do you think we should ship a driver that hasn't had a working CI result in 42 days?
16:46:39 <Swanson> hemna +1
16:46:44 <e0ne> hemna: +1
16:46:54 <hemna> or a CI that has a 30% success rate in the same time period?
16:47:08 <smcginnis> I think we keep getting better at our CI policy. Now that we have a good tool to get the data we need, I think at the PTG we can iterate again and make our policy more clear and be in a better position to enforce it.
16:47:40 <hemna> if everyone is ok with shipping drivers marked as supported when they are broken, that's cool.
16:47:40 <bswartz> we ship it, but give it a red mark of shame -- that's the point of "unsupported" right?
16:47:45 <jungleboyj> The incremental improvement has been the key.
16:47:47 <hemna> bswartz, yes
16:47:53 <Swanson> bswartz, +1
16:47:56 * smcginnis looks through the latest output again...
16:48:13 <smcginnis> hemna: Do you know the list of which ones would be affected if we did it now?
16:48:30 <Swanson> Does it matter who?
16:48:38 <hemna> I'd have to call them out by looking at the results
16:48:41 <hemna> I didn't want to do that here.
16:48:41 <smcginnis> Just getting an idea of how many are impacted.
16:48:45 <bswartz> Swanson: +1
16:48:49 * jungleboyj pictures smcginnis pulling out his red marker.
16:48:59 <hemna> http://paste.openstack.org/show/597227/
16:48:59 <xyang2> are there any drivers not covered by this output?
16:48:59 <smcginnis> Blockbridge has been 63 days. That should probably get flagged.
16:49:01 <hemna> the data is there.
16:49:45 <hemna> xyang2 http://paste.openstack.org/show/597233/
16:49:52 <hemna> thats the list of CI's I used to run the report
16:50:03 <hemna> I think some of them are dead now
16:50:10 <Swanson> How about we set the bar low for ocata and then wiki up a higher bar for pike?
16:50:21 <hemna> Swanson, +1
16:50:29 <smcginnis> Swanson: That's kind of what I was thinking.
16:50:29 <pots> Swanson +1
16:50:34 <scottda> Swanson: +1
16:50:35 <xyang2> hemna: that list might have missed some drivers
16:50:41 <bswartz> hemna: nice job computing results on a per-jenkins-job basis
16:50:50 <erlon> Swanson: +1
16:50:52 <xyang2> Swanson: +1
16:50:55 <hemna> can we backport a bug fix to remove the flag after O ships?
16:50:59 <smcginnis> But a few of these are pretty bad, so I would be open to flagging a couple of these that are really bad. Two months is a bit extreme.
16:51:15 <hemna> smcginnis, that's why I wanted to raise this.  some are really bad.
16:51:59 <jungleboyj> smcginnis, What about the question of backporting removal of the flag?
16:52:26 <smcginnis> hemna: To be fair, some of these are already flagged, so the list isn't really that big.
16:52:29 <erlon> jungleboyj: that seems a bad move IMO
16:52:34 <hemna> cool
16:52:39 <smcginnis> Or at least not as big as I originally thought.
16:52:39 <Swanson> Or you could just mark them all unsupported and then, if people contact you, take it on a case by case basis. 2 months isn't extreme for someone to get hold of you.
16:52:40 <hemna> that's a good thing :)
16:52:46 <dulek> jungleboyj: I would be fine if that's before the release. After the release that's a bad practice.
16:53:14 <jungleboyj> erlon, That was my thought too, just curious what people thought.
16:53:21 <smcginnis> jungleboyj: Yeah, I think I'd want to leave that to folks like RH and Mirantis to decide to "unmark" them after the release.
16:53:22 <dulek> jungleboyj: Distros would have different drivers supported depending on the minor version of the stable release they're based on.
16:54:03 <jungleboyj> dulek, Ew, that sounds messy.
16:55:14 <smcginnis> Just based on the discussion here, I would feel better if we don't do anything right now and improve our policy at the PTG.
16:55:14 <erlon> I would vote for flagging the drivers that are really below the bar and setting tighter bars in the next release
16:55:28 <erlon> but once flagged, it's flagged
16:55:38 <hemna> so 30%?
16:55:40 <hemna> or less?
16:55:44 <smcginnis> erlon: I'll filter through this list and see.
16:55:48 <hemna> smcginnis,
16:55:51 <hemna> cool
16:55:55 <smcginnis> No percentage at this point, just time since last reporting success.
16:56:19 <erlon> hemna: jenkins -15%?
16:56:38 <jungleboyj> smcginnis, I think that is safest for now.
16:56:39 <hemna> erlon, that's an interesting but valid measure I'd say
16:56:42 <smcginnis> That's it, I'm kicking Jenkins out. :D
16:56:45 <hemna> :)
16:56:59 <erlon> haha
16:57:12 <jungleboyj> :-)
16:57:12 <smcginnis> Hitachi is consolidated under one CI now, right?
16:57:20 <hemna> the interesting thing is that in some CIs most of the jobs are good but one, and that one causes the overall % to drop
16:57:34 <erlon> smcginnis: we have 2 accounts, 1 for HNAS and another for VSP/HBSD
16:57:54 <smcginnis> erlon: OK. Is the third party wiki page up to date?
16:58:26 <erlon> smcginnis: I believe yes, have to double check
16:58:43 <smcginnis> erlon: OK, thanks.
16:58:52 <smcginnis> 2 minutes. Anything else?
16:58:55 <Swanson> smcginnis WEAK ON CI! SAD! SOGGY!
16:58:59 <pots> smcginnis remember that there's a CI that was marked unsupported inadvertently--can we fix that today?
16:59:06 <smcginnis> Swanson: :)
16:59:17 <smcginnis> pots: What?
16:59:51 <pots> the hpmsa driver inherited the dothill driver's unsupported flag, but the hpmsa CI is running fine
17:00:09 <smcginnis> pots: I thought you or someone was going to submit a patch to unflag that one?
17:00:10 <jungleboyj> Swanson, And he is our PTL again.  What have we done?
17:00:20 <smcginnis> jungleboyj: Hah!
17:00:31 <smcginnis> Times up, let's continue in #openstack-cinder.
17:00:37 <smcginnis> #endmeeting