Wednesday, 2014-09-03

*** markwash__ has quit IRC00:54
*** mestery has quit IRC01:31
*** mestery has joined #openstack-relmgr-office01:31
*** mestery_ has joined #openstack-relmgr-office01:35
*** mestery has quit IRC01:37
*** markwash__ has joined #openstack-relmgr-office02:05
*** mestery_ is now known as mestery02:09
*** markwash__ has quit IRC02:36
*** david-lyle has joined #openstack-relmgr-office02:39
*** markwash__ has joined #openstack-relmgr-office03:13
*** markwash__ has quit IRC04:15
*** markwash__ has joined #openstack-relmgr-office04:26
*** markwash__ has quit IRC04:50
*** markwash__ has joined #openstack-relmgr-office04:52
*** markwash__ has quit IRC05:18
*** david-lyle has quit IRC06:19
*** flaper87|afk is now known as flaper8706:30
*** flaper87 is now known as flaper87|afk07:26
*** flaper87|afk is now known as flaper8707:39
*** zz_johnthetubagu is now known as johnthetubaguy08:30
*** jraim_ has quit IRC11:03
*** ttx has quit IRC11:03
*** jraim__ has joined #openstack-relmgr-office11:06
*** dolphm has quit IRC11:06
*** ttx has joined #openstack-relmgr-office11:09
*** dolphm has joined #openstack-relmgr-office11:11
*** markmcclain has joined #openstack-relmgr-office12:02
*** markmcclain has quit IRC12:02
*** markmcclain has joined #openstack-relmgr-office12:04
ttxSergeyLukjanov: around?12:35
ttxjohnthetubaguy: want to go through them now?12:51
johnthetubaguyttx: not really, bit distracted, maybe in an hour? or do we need to do this ASAP12:51
ttxjohnthetubaguy: no, one hour is good12:51
johnthetubaguyttx: awesome, thanks12:52
ttxthe sooner we can communicate to core reviewers that they shouldn't +2/APRV anymore, the better12:52
ttxbasically the queue should not get larger12:52
ttxdolphm: ping me when around12:54
ttxmestery: same12:54
SergeyLukjanovttx, yup, I'm here13:06
ttxSergeyLukjanov: hammer time!13:07
ttxhttps://launchpad.net/sahara/+milestone/juno-313:07
SergeyLukjanovttx, for sahara we're waiting for patches that are in gate13:07
ttxso all 4 are in-flight ?13:07
SergeyLukjanoveverything that wasn't approved yesterday already moved to future13:07
SergeyLukjanovyup13:07
ttxcould you let me know which review numbers to watch ?13:07
ttxthat way i can help re-enqueuing them13:08
SergeyLukjanovsure, let me list them13:08
ttxand I know when I can tag13:08
ttxalso should I defer all j3 bugs to rc1 ?13:09
ttxor do they have in-flight patches too ?13:10
SergeyLukjanovttx, everything that is still on j3 is in-flight13:10
SergeyLukjanovdo you prefer links to reviews or just review numbers?13:11
ttxok, numbers is fine13:11
SergeyLukjanovttx, 117276 112159 110518 117501 118146 110517 109394 10898213:13
ttxhmm, having trouble mapping them to BPs13:14
ttxwhich bp is /117276/ for13:15
SergeyLukjanov117276 and 117501 are work completion on cdh plugin13:16
ttx117501 needs a rebase according to the review page13:16
ttxso cdh-plugin is incomplete ? or?13:17
ttxalso what is 118146 about?13:17
SergeyLukjanovttx, both are final fixes on cdh plugin, w/o them it'll work but w/ them it'll work much better13:18
SergeyLukjanov117501 is a bug fix, so, it could be moved to rc113:18
SergeyLukjanov117276 is the CDH bug fix too, so, I'm ok with moving it to rc13:19
ttxif they make it, all the better... but I won't wait on them13:19
ttxthere is no review attached to swift-url-proto-cleanup-deprecations ?13:20
ttxor is it 118146?13:20
ttx118146 looks like a bugfix to me13:21
SergeyLukjanov118146 is a bug fix too13:21
SergeyLukjanovswift-url-proto-cleanup-deprecations is for the rc1, not fully completed, will ask you for FFE13:21
ttxok, we should move it now then13:21
SergeyLukjanovoh13:21
ttxdone13:21
SergeyLukjanovI've misread the bp name13:21
* ttx rolls back13:22
SergeyLukjanovit's already implemented13:23
SergeyLukjanovby  https://review.openstack.org/#/c/116119/13:23
ttxkewl13:24
ttxmark it implemented please13:24
ttxso as far as features go cluster-secgroups is waiting on 109394 110518 11051713:24
SergeyLukjanovyup, I've missed it prev time13:24
ttxceilometer-integration is waiting on 10898213:24
ttxand anti-affinity-via-server-groups is waiting on 11215913:25
ttxlet's try to get all of those in before tagging j313:25
ttxif the bugfix ones make it, good, if not, they will be rc1 material13:25
ttxwe'll review progress tomorrow morning13:25
SergeyLukjanovagreed, thank you13:26
ttxSergeyLukjanov: in the meantime, tell your core reviewers to not approve random patches for the next two days, only retry the feature stuff13:26
ttxso that we stop adding more to the queue13:26
SergeyLukjanovttx, ack13:26
* SergeyLukjanov checking the list one more time :)13:26
SergeyLukjanovyup, numbers are correct13:27
ttxok, starred them13:30
ttxSergeyLukjanov: thx!13:30
SergeyLukjanovttx, thank you13:30
SergeyLukjanovttx, it looks like the gate is very overloaded this time13:31
ttxSergeyLukjanov: your infra help is wanted in keeping it under control and fluid :)13:31
SergeyLukjanovttx, if you have a list of patches I could ninja merge them or promote them ;)13:32
ttxSergeyLukjanov: let's keep that option open for Friday :)13:33
SergeyLukjanovttx, yeah13:37
ttxSergeyLukjanov: fwiw, so far release priorities have never been accepted as a good excuse for gate bypass.13:38
SergeyLukjanovttx, it was a joke re ninja approving :)13:39
SergeyLukjanovttx, and I don't think that promoting is very bad if it's friday already :)13:39
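A minimal sketch of how a pasted list of review numbers like the one above could be polled for status via the Gerrit REST API, as an alternative to starring them by hand; the helper name, the hard-coded change numbers, and the anonymous-endpoint usage are illustrative assumptions, not project tooling.

    # Sketch: poll the Gerrit REST API for the status of a list of changes.
    # The change numbers are the Sahara ones pasted earlier, used only as an example.
    import json
    import requests

    GERRIT = "https://review.openstack.org"
    CHANGES = [117276, 112159, 110518, 117501, 118146, 110517, 109394, 108982]

    def change_status(number):
        resp = requests.get("%s/changes/%d" % (GERRIT, number))
        resp.raise_for_status()
        # Gerrit prefixes JSON responses with ")]}'" to block XSSI; strip the first line.
        return json.loads(resp.text.split("\n", 1)[1])

    for number in CHANGES:
        info = change_status(number)
        print("%d %-9s %s" % (number, info["status"], info["subject"]))
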
*** dhellmann has quit IRC13:57
*** dhellmann has joined #openstack-relmgr-office13:58
mesteryttx: Around14:25
ttxmestery: ohai14:29
mesteryttx: So, looking like we may need 2 FFEs in Neutron:14:29
mesteryhttps://blueprints.launchpad.net/neutron/+spec/l3-high-availability and https://blueprints.launchpad.net/neutron/+spec/add-ipset-to-security14:30
mesterySecurity group RPC refactor is in the queue now.14:30
ttxmaybe we can move those to the rc1 milestone already14:30
mesteryOK, cool, I'll do that now.14:30
mesteryOtherwise, I have a call with rkukura to discuss https://blueprints.launchpad.net/neutron/+spec/ml2-hierarchical-port-binding and it may get an FFE as well.14:30
ttxwhat about neutron-dvr-fwaas?14:30
mesteryI need to talk to the owner of that one too14:30
mesteryThat one may need to be RC1 as well14:31
ttxif it's only 3 I think we are in good shape14:31
mesteryOtherwise, all the rest I plan to move out of Juno today14:31
mesteryCool14:31
ttxyes, please move out the ones that are not in-flight14:31
mesteryPeople may ask for this one (https://blueprints.launchpad.net/neutron/+spec/reorganize-migrations), I need to talk to salv-orlando and will get back to you on it.14:31
ttxand communicate to core reviewers that they should refrain from enqueuing random changes until the tag14:32
mesteryI'm ahead of you on that one, I sent that email to core reviewers last night already :)14:32
ttxOK, let me know when you did the cleanup, and we can map the remaining in-flight stuff to review numbers, so that I can help keep an eye on them14:33
ttxwill be easier to do once you complete the cleanup14:33
mesteryttx: Ack, give me 15-30 minutes or so14:33
ttxok, ping me when done14:33
mesteryttx: ack14:35
ttxdolphm: ping me when around14:36
dolphmttx: o/14:36
ttxready to drop the hammer ?14:36
*** david-lyle has joined #openstack-relmgr-office14:36
ttxyou have 2 left... which of those is all in-flight ?14:36
dolphmttx: everything for j3 is in flight14:37
ttxdolphm: ok, could you link them to in-flight review numbers ?14:37
ttxkeystone-to-keystone-federation14:37
dolphmttx: link where? i've got https://gist.github.com/dolph/651c6a1748f69637abd014:38
ttxjust paste review numbers here14:38
ttxok, I can work from that doc14:38
ttxdolphm: shouldn't you "reverify" them ?14:40
dolphmttx: i thought all that was replaced by just 'recheck'?14:40
ttxdolphm: hmm, maybe doublecheck14:40
ttxI'm not sure anymore14:40
ttxbecause what's sure is that it didn't re-enqueue in the gate14:41
dolphmttx: yeah, what i've noticed is that they seem to run a check job, and then when that succeeds, it re-enqueues14:41
ttxok14:41
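The "recheck" dolphm mentions is just an ordinary Gerrit comment that the CI system watches for; below is a rough sketch of leaving one programmatically through Gerrit's review REST endpoint, with the username, HTTP password, and change number as placeholders (an assumption about tooling, not something referenced in the channel).

    # Sketch: post a "recheck" comment on a change via the authenticated
    # Gerrit REST API so that CI re-runs the check jobs for it.
    import json
    import requests
    from requests.auth import HTTPDigestAuth

    GERRIT = "https://review.openstack.org"
    # Placeholder credentials: a Gerrit username and its generated HTTP password.
    AUTH = HTTPDigestAuth("myuser", "my-http-password")

    def recheck(change_number, message="recheck"):
        url = "%s/a/changes/%d/revisions/current/review" % (GERRIT, change_number)
        resp = requests.post(url, auth=AUTH,
                             headers={"Content-Type": "application/json"},
                             data=json.dumps({"message": message}))
        resp.raise_for_status()

    recheck(111949)  # placeholder change number
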
ttxdolphm: please communicate to keystone core reviewers that they should hold on approving other stuff, so that we increase the chances that all get in before the tag14:42
ttxdolphm: can I move all bugs to rc1 ?14:42
dolphmalrighty - on that note - we had another bp actually make all our deadlines, and it's now gating. but i never had it targeted to j3 until today14:43
dolphmhttps://blueprints.launchpad.net/keystone/+spec/multi-attribute-endpoint-grouping14:43
dolphmhttps://review.openstack.org/#/c/111949/14:43
ttxok let's add it to the mix. which review is that ?14:43
ttxok14:43
dolphmttx: and i'll clean up bugs. some should be rc1, some need to be dropped14:44
ttxdolphm: ok, I'll let you do the cleanup then14:44
ttxdolphm: thx, will ping you again tomorrow, retry at will in the mean time14:45
ttxdavid-lyle: ping me when around14:45
david-lylettx: here14:45
ttxdavid-lyle: time to cut what's not approved yet14:46
ttxi see you marked some deferred14:46
ttxare all the "needs code review" in flight ?14:46
david-lyletop 3 are waiting on client releases, so they will be FFE14:47
david-lyle3 of the other 4 are in the gate14:47
david-lylehttps://blueprints.launchpad.net/horizon/+spec/horizontal-form is small and only needs one more +2, but not critical14:47
ttxdavid-lyle: those needing FFEs you can move them to RC1 milestone14:48
david-lyleI can defer14:48
ttxwe'll consider them from there14:48
david-lyleok14:48
ttxdavid-lyle: maybe horizontal-form can be FFEd too14:48
ttxso you can move it to rc114:48
david-lyleok14:49
ttxwhich leaves us with...14:49
ttxcontext-selection14:49
ttxpending on https://review.openstack.org/113331 ?14:49
david-lylein gate toward the top14:49
david-lyleyes14:49
ttxevacuate-host14:49
david-lylealso in the gate14:49
ttxhmm, which review?14:50
david-lylehttps://review.openstack.org/#/c/75940/14:50
ttxadd-attributes-for-table-cells > https://review.openstack.org/#/c/94450/14:51
ttxdavid-lyle: can I move all bugs to rc1 ?14:51
david-lyleyes in gate14:51
david-lylettx: yes14:52
ttxok move in progress14:52
ttxi'll let you move the FFEd blueprints to rc114:52
ttxThe deferred ones I'll just remove the juno-3 target14:53
david-lyleok, FFEs moved to rc-114:54
ttxok all looking good now14:54
david-lylemuch cleaner14:55
ttxwill tag if those 3 are in. Will talk to you tomorrow if not :)14:55
david-lyleok, hoping the gate odds are in our favor14:55
ttxdavid-lyle: in the meantime, ask horizon-core to hold on random approvals, to increase the odds14:55
david-lylewill do14:55
ttxevery bit may help a little14:55
mesteryttx: Quick question. This one (https://blueprints.launchpad.net/neutron/+spec/brocade-l3-svi-service-plugin) had two +2s, but there was confusion with the submitter around CI results.14:57
mesteryttx: I think this one is a safe FFE for RC1 as well, given the above. Thoughts?14:58
ttxmestery: it's limited to the brocade plugin, right14:58
ttxso it's probably safe14:59
mesteryttx: Correct14:59
ttxfeel free to move it there14:59
mesteryttx: Yes, and the only reason I ask is given the confusion around the CI, which was working.14:59
mesteryttx: Thank you sir!14:59
ttxjohnthetubaguy: let me know when is a good time for you14:59
johnthetubaguyttx: hey, going through the list now, from the bottom, most things are waiting on gate merges in the Low priority stuff now15:00
ttxok, if you want as you go through them, just paste here the bp/associated reviews15:01
ttxsince that's what I need to track them efficiently15:01
johnthetubaguyah, OK, probably needs another pass15:02
johnthetubaguyI should be doing the xenapi thing really15:02
ttxjohnthetubaguy: once I have the mapping I can keep track of things, move them to implemented if they pass gate, and recheck them if they don't15:05
ttxjohnthetubaguy: hopefully mikal can cover the night retry shift for us15:06
johnthetubaguyttx: it's loads of patches right now, but I get what you mean, we need a list15:06
johnthetubaguyI assume he will have landed by then15:06
ttxjgriffith: ping me when around15:14
mesteryttx: This one is 100 lines of code and is a good community thing for RC1: https://blueprints.launchpad.net/neutron/+spec/l3-metering-mgnt-ext15:22
mesteryIt's been around for a while, I'm inclined to target RC115:22
ttxmestery: hmm, maybe add it to RC1 and we can look into the pile after the tag, hopefully on Friday15:23
mesteryttx: Ack15:23
*** johnthetubaguy is now known as zz_johnthetubagu15:40
ttxmestery: your 15-30 minutes are up15:41
ttx69 actually15:41
mesteryttx: Please have a look now15:41
mestery4 left15:41
mesteryworking on getting those out now15:41
mesteryOtherwise it's in good shape right?15:41
*** zz_johnthetubagu is now known as johnthetubaguy15:41
ttxOK, if you have the BP/review map for those 4 i'll take it15:42
mesteryLet me finish, I'm almost done :)15:42
SergeyLukjanovttx, looks like the last sahara patch needs a really simple rebase, is it ok or is it better to ask for an FFE?15:42
ttxso that I can add them to my starred list :)15:42
SergeyLukjanovttx, 11215915:42
ttxmestery: in the mean time, any bug in the j3 list you would like to keep in the target ?15:42
ttxor can I fire the script to move them all to rc1 ?15:43
mesteryttx: Please remove them all15:43
ttxSergeyLukjanov: try the rebase and approve. If it doesn't make it we can FFE it15:44
SergeyLukjanovttx, ack15:45
ttxmestery: all moved15:51
ttxjohnthetubaguy: let me know when you want to go through the in-flight list15:52
mesteryttx: thanks sir!15:52
johnthetubaguyttx: yeah, we should do that15:52
ttxok, before I leave for a dinner break15:52
ttxjohnthetubaguy: those last 11 are all in-flight at this point?15:53
ttxjohnthetubaguy: or cleanup is still in progress?15:53
mesteryttx: Neutron Juno-3 LP page is good now, the 3 in "Needs review" are all in the queue15:53
ttxmestery: could you give me the list of reviews each one is pending on ? I can update status for you tomorrow morning that way15:54
ttxand also call for rechecks15:55
mesteryttx: Getting them15:55
mesteryttx: https://review.openstack.org/#/c/116188/ and https://review.openstack.org/#/c/113749/ and https://review.openstack.org/#/c/113653/15:56
mesteryttx: They are all low, so if they fail, don't hold the tag or recheck15:56
johnthetubaguyttx: it's mostly all in the gate, API ones need clean up15:56
mesteryttx: But this one (https://review.openstack.org/#/c/118456/) needs to go in, it's a revert15:56
ttxmestery: ok, will tag if that one goes in, and the others look too far away15:57
mesteryThanks!15:57
ttx  else: will ping you tomorrow.15:57
ttxjohnthetubaguy: do you think we can enumerate the reviews or is it a futile exercise and we should just talk again tomorrow ?15:58
ttxGiven that we share TZ it may just make more sense15:59
ttxit's not as if there was a list of 5 we were blocking on16:00
ttxSlickNik, stevebaker: i'll be back at 18:30 UTC to work on your j-3 status16:01
ttxjohnthetubaguy: so... talk to you tomorrow ?16:04
ttxI'll take that as a yes and have a break now.16:06
*** johnthetubaguy has quit IRC16:08
*** johnthetubaguy has joined #openstack-relmgr-office16:09
SlickNikttx: okay sounds good. Trove's still got a few reviews in flight.16:13
*** markwash__ has joined #openstack-relmgr-office16:20
*** markwash__ has quit IRC16:24
*** markwash__ has joined #openstack-relmgr-office16:38
*** arnaud has joined #openstack-relmgr-office16:45
*** johnthetubaguy is now known as zz_johnthetubagu16:58
*** morganfainberg is now known as morganfainberg_Z17:53
*** arnaud has quit IRC17:53
*** arnaud has joined #openstack-relmgr-office18:27
ttxSlickNik: I'm around now18:39
SlickNikttx18:40
SlickNikhere18:40
ttxyou have 4 bps still in the list18:40
ttxdo they all have in-flight reviews ?18:40
SlickNikttx, All of them are in flight18:40
SlickNikOne of them actually just merged. I need to update the page.18:41
* SlickNik does that now18:41
ttxOK, for the others I'll need to know which review they hinge on18:41
ttxso that I can track them18:41
SlickNikroger that. getting them to you now.18:42
SlickNikhttps://blueprints.launchpad.net/trove/+spec/update-instance-name is https://review.openstack.org/#/c/92701/18:42
ttxthe deferred one, is it deferred to kilo or are you planning on getting a FFE for it ?18:42
SlickNikhttps://blueprints.launchpad.net/trove/+spec/postgresql-support is https://review.openstack.org/#/c/57609/18:43
SlickNikdeferred is deferred to kilo18:43
ttxok I'll just remove the milestone target then18:43
SlickNikI moved the one that we need a FFE for to RC118:43
SlickNikttx, okay. Thanks!18:44
ttxshall I move the two bugs to the RC1 milestone too ?18:44
SlickNikAnd https://blueprints.launchpad.net/trove/+spec/configuration-parameters-in-db is https://review.openstack.org/#/c/79850/18:44
SlickNikThe two bugfixes are in the gate queue — it's likely they'll merge before the bps, so I'd leave them in.18:46
ttxok18:46
SlickNikIf they fail to merge for whatever reason, I'll move them to rc1 and not recheck to keep the gate clean.18:46
ttxok18:46
ttxSlickNik: ok you're all set18:47
SlickNikttx: Thanks much for your help!18:47
ttxplease tell trove-core to refrain from pushing random changes to the gate18:47
ttxand I'll tag if we get to the bottom of your targeted changes18:48
ttxotherwise I'll talk to you tomorrow again18:48
SlickNikYup, they're aware of that. I mentioned it to them this morning.18:48
ttxSlickNik: cool, thx!18:48
SlickNikSounds good. Catch you later.18:49
ttxmarkwash__: ping me when around18:49
ttxstevebaker: ping me when around18:49
markwash__ttx: can haz 10 minutes?18:49
ttxsure18:49
stevebakerttx, ping18:55
ttxstevebaker: hi!18:55
ttxhttps://launchpad.net/heat/+milestone/juno-318:55
ttxthere are 7 blueprints left18:56
ttxtime to reduce it to what's in-flight18:56
ttxcould you do a pass on them and only keep what's already enqueued in the forever gate ?18:56
stevebakeryep, starting to trawl through them now18:56
ttxfor the remaining ones we can split between FFE candidates and Kilo deferrals18:56
ttxok, let me know when you're done with that. I'll be around for the next 2 hours18:57
ttxstevebaker: I can move all the bugs to the RC1 target if you want me to18:58
* ttx has got a script for that now18:58
stevebakerttx, yeah, that would be good18:58
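The script ttx refers to is not shown in the channel; a plausible reconstruction with launchpadlib is sketched below, assuming milestones named "juno-3" and "juno-rc1" exist on the project and using "heat" purely as an example target.

    # Sketch: bulk-retarget open juno-3 bugs to the juno-rc1 milestone.
    from launchpadlib.launchpad import Launchpad

    lp = Launchpad.login_with("milestone-mover", "production")
    project = lp.projects["heat"]
    juno_3 = project.getMilestone(name="juno-3")
    juno_rc1 = project.getMilestone(name="juno-rc1")

    # searchTasks() defaults to open statuses, so already-fixed bugs stay on juno-3.
    for task in project.searchTasks(milestone=juno_3):
        task.milestone = juno_rc1
        task.lp_save()
        print("retargeted: %s" % task.title)
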
ttxonce we are done with the targeting, ideally you should tell heat-core to refrain from piling up new changes onto the gate, to increase the chances of targeted changes making it18:59
stevebakerok. so the bps which are left are in-flight19:04
ttxok, which reviews are they hinging on ? That way I can help by reverifying them tomorrow morning in case they are stuck19:05
ttxhttps://review.openstack.org/#/q/topic:bp/deployment-multiple-servers,n,z  ?19:06
ttxand https://review.openstack.org/#/q/topic:bp/implement-ec2eip-updatable,n,z ?19:06
ttxor more?19:06
ttxstevebaker: on the latter https://review.openstack.org/118562 is part of it and not approved yet?19:07
stevebakerI'm reviewing that now, I'm inclined to just approve it unless you stop me. The changes before are still in-flight19:08
ttxgo ahead19:09
ttxstevebaker: ok, you're all set. I'll tag if all those merge. Otherwise I'll just talk to you tomorrow so that we make more decisions19:10
stevebakerttx, ok. thanks. I'll ping you if I notice they merge19:11
ttxmarkwash__: you haz 23min19:12
markwash__ttx: ohai19:12
markwash__if only I had spent my 23 minutes for good instead of evil19:13
ttxhmm, so 4 blueprints still up19:13
ttxmarkwash__: we need to only keep what's approved and in-flight at this point19:13
ttxif we are to tag before the end of time19:14
ttxso which one, if any, is fully approved and lost in gate hell ?19:14
ttxfor those which aren't, which ones are worth considering an FFE for, and which ones should just be deferred to kilo ?19:15
markwash__ttx: one moment19:15
markwash__ttx, so I think some are actually "Implemented" at this point but the release page is not up to date19:18
ttxshocking!19:18
markwash__specifically, "Restrict users from downloading..." is Implemented19:18
* ttx hits reload frantically19:18
ttxmarkmcclain: please update to reflect reality19:19
ttxargh, damn marks19:19
ttxmarkwash__: ^19:19
markwash__ttx there is one change almost in the gate left for metadata-schema-catalog19:21
markwash__at least for the "openstack/glance" project proper19:21
ttxmarkwash__: if you approve it in the next minute, I'll close my eyes19:21
markwash__(there is also a change to the client that is relevant for horizon but I'm not sure how that is shaking out at this point)19:21
ttxyou can mark the BP implemented if the glance part is covered19:22
markwash__ttx I think it is already approved and "in" the gate actually19:22
ttxmarkwash__: haha19:22
markwash__at 54 patchsets it's hard to read gerrit :-)19:22
ttxwhat about the two others?19:23
markwash__ttx we need to kilo-defer "gpfs-image-store"19:23
markwash__should I just adjust the milestone target?19:24
ttxyes, just remove juno-319:24
markwash__done with that bp then19:25
markwash__and async processing. . .19:25
ttxthat leaves us with async processing19:25
ttxit looks a bit far away for an FFE?19:25
ttxI mean, if you think it can make it soon, I'm fine with it... but...19:26
markwash__ttx I'm not seeing it landing19:29
markwash__this bp makes me sad19:29
markwash__its been so long!19:29
markwash__:-(19:29
markwash__anyway19:29
ttxok, you can remove the j3 target on it19:29
ttxlast question.. which review(s) is metadata def catalog waiting on ?19:30
ttxI can help babysit it tomorrow morning if needed19:30
markwash__ttx: actually, sorry, I'm out of date on async, I think it's to be marked as implemented as well19:30
markwash__one sec, let me try to verify again19:30
* ttx backs off slowly19:30
markwash__haha19:31
* markwash__ is a bit of a rollercoaster today19:31
ttxok, now I just need the review number(s) for the last one19:31
markwash__ttx, okay i think https://review.openstack.org/#/c/44355/ "implemented" the async bp19:32
markwash__I suppose as needed we can file additional bps if folks were intending more features around that epic19:32
ttxno, I mean, the metadata def catalog one19:32
markwash__right, sorry, one sec19:32
ttxwhich review is it waiting on ?19:32
markwash__ttx https://review.openstack.org/#/c/111483/19:33
ttxmarkwash__: ok, will tag when that one lands, otherwise will talk to you same time tomorrow :)19:35
ttxIn the mean time tell glance-core to behave and not enqueue loads of non-targeted things19:35
markwash__ttx thanks for your help and guidance19:35
ttxnp19:36
* ttx disappears for the evening19:37
* markwash__ crawls back under a rock19:37
*** markmcclain1 has joined #openstack-relmgr-office20:53
*** markmcclain has quit IRC20:54
*** markmcclain1 has quit IRC21:01
*** markmcclain has joined #openstack-relmgr-office21:01
*** zz_johnthetubagu has quit IRC21:10
*** zz_johnthetubagu has joined #openstack-relmgr-office21:11
*** zz_johnthetubagu is now known as johnthetubaguy21:11
*** david-lyle has quit IRC21:12
*** dhellmann is now known as dhellmann_21:28
*** david-lyle has joined #openstack-relmgr-office21:33
*** markwash__ has quit IRC22:01
*** openstack has joined #openstack-relmgr-office22:10
*** markwash__ has joined #openstack-relmgr-office22:12
*** markmcclain has quit IRC22:20
markwash__ttx: actually https://review.openstack.org/#/c/44355/ needs babysitting through the queue as well22:27
markwash__I was mistaken previously22:27
*** arnaud has quit IRC22:51
*** david-lyle has quit IRC23:14
*** openstackstatus has quit IRC23:19
*** openstackstatus has joined #openstack-relmgr-office23:20
*** ChanServ sets mode: +v openstackstatus23:20

Generated by irclog2html.py 2.14.0 by Marius Gedminas - find it at mg.pov.lt!