13:59:48 #startmeeting Glance
13:59:48 Meeting started Thu Nov 12 13:59:48 2015 UTC and is due to finish in 60 minutes. The chair is flaper87. Information about MeetBot at http://wiki.debian.org/MeetBot.
13:59:49 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
13:59:51 The meeting name has been set to 'glance'
13:59:55 o/
13:59:57 o/
14:00:09 #topic Roll Call
14:00:10 o/
14:00:23 so, who's around ?
14:00:25 :D
14:00:27 pppppplllll
14:00:30 :D
14:00:34 * flaper87 has a terrible connection today
14:00:35 o/
14:00:38 #topic Agenda
14:00:43 #link https://etherpad.openstack.org/p/glance-team-meeting-agenda
14:00:48 o/
14:00:54 That's our agenda for today
14:01:17 o/
14:01:35 not much to say today other than updates
14:01:43 #topic Updates from summit http://lists.openstack.org/pipermail/openstack-dev/2015-November/078235.html (flaper87)
14:02:04 So, that's the email with the summary from the summit
14:02:15 Hope you all read it and you all love it
14:02:18 :P
14:02:24 jokes aside, are there questions from that email?
14:02:40 o/
14:02:44 I'd like to take some time to answer questions and doubts from the summit
14:02:46 nikhil_k: yoooooooooooooooooo
14:02:54 Even for people that attended
14:03:16 During summits we discuss many things but nothing is written in stone
14:03:21 flaper87: it was brief and clear :)
14:03:40 I'd like that to be clear to everyone and make sure ppl know that feedback is always welcome
14:03:45 flaper87, I need to clarify one question :)
14:03:54 it is related to priorities
14:03:57 kairat: shoot
14:04:05 I have priorities in the topics for today too
14:04:10 Ok
14:04:20 let's talk about it later)
14:04:21 :D
14:04:41 ok, any other questions I can delay answers for ?
14:04:45 :D
14:05:07 I'll take that as a no
14:05:18 #topic Priorities for Mitaka http://specs.openstack.org/openstack/glance-specs/priorities/mitaka-priorities.html (flaper87)
14:05:24 kairat: gimme 1s
14:05:37 flaper87, ok
14:05:59 That's the list of priorities. That's what we should focus our review efforts on. However, that doesn't mean we won't review other patches
14:06:28 The priorities list helps reviewers know what to focus on when in doubt and communicates to the community what the team wants to achieve during the cycle
14:06:34 o/
14:06:40 there are things that have a clear plan forward (or focus)
14:07:00 and there are others that will still happen (hopefully) but are less "critical"
14:07:22 Hope it's clear that the priorities list *doesn't* mean other things won't be reviewed
14:07:30 that's it from me
14:07:36 kairat: shoot
14:07:52 sounds good. so, this is an immutable list, correct?
14:08:03 So I got this: https://review.openstack.org/#/c/233687/
14:08:16 nikhil_k: yup
14:08:20 and i was wondering if priorities are like restrictions
14:08:24 * nikhil_k has a small keyboard so is typing very slowly
14:08:32 or like things that we should be focused on
14:09:07 kairat: to some extent it also works as a way to know what we would like to have in the cycle or not
14:09:28 if there are 10 specs impacting the API, we might need to choose which ones we'll let in
14:09:31 I understand the reasoning, but is it correct to have a -2 on every bp that is not included in the priorities?
14:09:50 kairat: looking at that change ...
I don't think it got -2'd for priority reasons
14:09:51 * nikhil_k still absorbing new process & waiting for things to fan out before giving feedback (about feedback req before)
14:10:02 kairat: no, in fact, there are other specs there that don't have -2's
14:10:08 kairat: I think that -2 is not related to priorities
14:10:18 and yeah, that
14:10:22 what mfedosin and Jokke_ said
14:10:36 we just postponed your feature to N
14:11:00 because there are a lot of changes in Mitaka
14:11:15 * flaper87 is scared about the import process work
14:11:25 that's gonna be huge and it'll require lots of time
14:11:34 he's not alone
14:11:35 Ok, so no features except priorities?
14:12:02 and bugs of course
14:12:12 kairat: no, that's not what we are saying
14:12:17 please, take a look at the list of specs
14:12:19 kairat: let's work on a case-by-case basis, I think the break would mostly be on API changes
14:12:23 there *are* new features there
14:12:32 we simply can't afford them all
14:12:37 nikhil_k: ++
14:13:12 flaper87, ok, is it mentioned in your spec?
14:13:15 * flaper87 has 10s lag
14:13:25 kairat: it is, AFAIK
14:13:44 flaper87, ok, need to review it again, thanks
14:14:08 There's also a reason why we call it a priorities list and not an "exclusive list of things we'll accept"
14:14:20 The process is new and these questions are awesome
14:14:25 kairat: you potentially have 30 mins to move your spec fwd at the drivers' mtg
14:14:29 kairat: the point is that since we're doing major rework around our core functionalities, we do not want multiple things poking those in parallel, so if we break something we have a decent idea what actually broke it
14:14:35 dedicated mins
14:14:36 let's clarify them so we have a clearer process
14:15:04 kairat: if it's not clear enough in the priorities spec, let me know and I'll happily amend it
14:15:12 * nikhil_k still doesn't know the spec :P
14:15:20 Thanks guys=)
14:15:42 nikhil_k: http://specs.openstack.org/openstack/glance-specs/priorities/mitaka-priorities.html
14:15:46 #link http://specs.openstack.org/openstack/glance-specs/priorities/mitaka-priorities.html
14:15:57 ok, moving on (unless there are other questions)
14:16:06 flaper87: thanks. I was curious about kairat's one.
14:16:27 nikhil_k: ah, sorry. misunderstood
14:16:32 * flaper87 (facepalm)
14:16:38 #topic Glance v2 additional filtering https://blueprints.launchpad.net/glance/+spec/v2-additional-filtering https://review.openstack.org/#/c/197388/ (slow progress) <- required for Nova v2 adoption (mfedosin)
14:16:45 that's a long topic name
14:16:47 nikhil_k, https://review.openstack.org/#/c/233687/
14:16:50 :D
14:16:50 oh yes
14:16:52 mfedosin: floor is yours
14:16:52 https://review.openstack.org/#/c/233687/
14:16:58 nikhil_k: ^
14:17:00 I tried to explain the issue
14:17:31 so, yeah - we have this spec merged in Liberty
14:17:49 but the progress is slow :(
14:18:11 mfedosin: FWIW, I have re-proposed it for Mitaka
14:18:16 and we really need this feature to port Nova to v2
14:18:17 but we need commitment from someone
14:18:34 * flaper87 can't find the link
14:18:47 #link https://review.openstack.org/#/c/230971/
14:18:51 that one
14:19:02 What exactly do you need?
14:19:05 so I'm okay to take it on and start writing the code
14:19:17 mind expanding a bit so we're all on the same page
14:19:17 I need this feature asap
14:19:38 because it blocks v2 image-list for Nova
14:19:43 mfedosin: is that the piece we discussed with Jay in Tokyo?
14:19:52 Jokke_: yes
14:20:19 not exactly - I wanted to implement it in the client
14:20:23 but having it on the server side is perfect
14:20:35 thanks kairat, abhishekk, mfedosin
14:20:41 well if we get it working ;P
14:20:42 mfedosin: I don't think ppl here know exactly what you need
14:20:48 Is anything in particular blocking this? Or is it just reviews?
14:20:50 I do :P
14:20:57 but please, explain to others
14:21:08 mclaren: I think we need to get the spec in again and the code up
14:21:10 :D
14:21:25 ok, so paperwork...
14:21:34 mclaren: right :D
14:21:50 wait, this is changes-since, correct?
14:21:53 mclaren, it doesn't pass tests
14:22:03 I wish Steve were here to talk about it
14:22:07 nikhil_k: yes
14:22:08 and progress on this feature is quite slow
14:22:21 nikhil_k: yes, it's kind of changes-since
14:22:21 nikhil_k: yup
14:22:21 Ok, so Steve is blocking us :-)
14:22:22 sorta
14:22:41 I see, thanks all.
14:22:41 so we need some way to filter our output to simulate changes-since so we can keep the nova API unbroken
14:22:52 I'll ping Steve and get his feedback and ask him if he's fine with us taking this over
14:22:55 so, if he doesn't mind we can implement it
14:23:09 me and kairat
14:23:17 * flaper87 is not sure if his messages are reaching destination
14:23:36 flaper87: please do :)
14:23:44 :D
14:23:47 I can ping him as well
14:23:51 ok ok
14:23:55 Let's get this going
14:24:01 hey I'll ping him too!
14:24:03 mfedosin: then don't wait for me
14:24:05 heh
14:24:10 mfedosin: when you do, please, update the spec
14:24:13 :)
14:24:17 let's all ping him :D
14:24:30 it needs the nick of the person who's going to work on this
14:24:44 flaper87: yep, I will update it
14:24:49 just send an email
14:24:54 after that, we can merge that spec. I'll re-read it to make sure it doesn't have weird impacts on the rest of the work
14:25:05 email + ping + sms + telegram
14:25:08 done
14:25:12 :D
14:25:16 forgot pager
14:25:18 :P
14:25:21 damnit
14:25:24 :(
14:25:25 :D
14:25:29 mfedosin: anything else ?
14:25:42 nope sir
14:25:52 sweet, thanks for working on that
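For reference, here is a minimal sketch of what the changes-since emulation discussed above could look like from the consumer side, assuming the comparison-operator syntax proposed in the v2-additional-filtering spec (a gte: prefix on time-based filters). The endpoint, token, and function name are hypothetical placeholders, not the final API:

```python
# Hypothetical sketch: emulate Nova's v1 "changes-since" on top of the
# Glance v2 API, assuming the spec's proposed comparison-operator syntax
# (e.g. updated_at=gte:<timestamp>) lands as written.
import requests

GLANCE_ENDPOINT = "http://glance.example.com:9292"  # placeholder endpoint
TOKEN = "..."                                       # placeholder auth token


def images_changed_since(timestamp):
    """List images whose updated_at is >= the given ISO 8601 timestamp."""
    resp = requests.get(
        GLANCE_ENDPOINT + "/v2/images",
        headers={"X-Auth-Token": TOKEN},
        params={"updated_at": "gte:" + timestamp},
    )
    resp.raise_for_status()
    return resp.json()["images"]
```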
14:25:55 #topic Glance upgrades (flaper87)
14:26:04 flaper87: did you send a fax to the office?
14:26:34 I don't really have much to say here and it's perhaps an open question. How do we feel about the upgrade process in Glance? What are we missing?
14:27:02 which upgrades? API, DB, other (service) imports?
14:27:02 Documentation?
14:27:14 I think we haven't revisited this topic in a bit and, while we have migrations in place, I think it'd be great to check if there's something we need to do to improve it
14:27:25 Everything Juno -> Kilo -> Liberty -> Mitaka
14:27:34 It's a wide open question
14:27:48 yeah, good question
14:27:56 has anyone tried it ?
14:28:12 There's a lot of work on communicating what the services' upgrade story is
14:28:25 Whether they support upgrades AND whether they support rolling upgrades
14:28:43 nikhil_k: Not since a few releases ago unfortunately...
14:28:53 I've been meaning to take some time to test the above but I wanted to ask if ppl have given that a try
14:29:06 thanks mclaren
14:29:07 ok, I guess we need to clear up that story a bit
14:29:23 * flaper87 wonders if rosmaita has done upgrades
14:29:30 I will put this in my TODO list as one of the first few items when fully back.
14:29:35 I doubt it
14:29:42 I'll start an etherpad to collect thoughts and issues about this
14:29:51 nikhil_k: ah, nice
14:29:52 thanks
14:29:55 flaper87: do you have time to explain the rolling upgrade expectations?
14:29:58 * flaper87 removes that from his todo list
14:29:58 :P
14:30:10 Sure
14:30:42 tl;dr: The expectation is that you can upgrade 1 glance-api node at a time to avoid downtime
14:30:46 i have tried nova's online schema migration from juno >> kilo
14:30:58 but they have reverted it now
14:31:13 sometimes that's complicated when there are schema migrations, hence versionedobjects
14:31:33 flaper87: I don't think the API side is our problem
14:31:47 gotcha, I am recollecting a bit on vo now.
14:31:59 flaper87: the problem is when we do the DB migration and need to roll the registries
14:32:04 If we have a problem on the db side, I believe it'll affect the API as well
14:32:09 :)
14:32:20 flaper87: I thought the same
14:32:44 * nikhil_k is unsure if we added virtual size in juno or not
14:33:13 if we have an old glance-api node running and we migrate the db under-the-hood, the glance-api node could break
14:33:15 and that will cause downtime
14:33:15 nikhil_k: thanks for taking this
14:33:28 nikhil_k: I think it was Icehouse
14:33:31 flaper87: our API <--> Registry communications have been really stable ... we really haven't introduced too huge expectations from the DB (on old functionality)
14:33:53 the same applies to the registry node
14:33:57 you need to upgrade those
14:34:07 and there are environments running without the registry service
14:34:13 pure glance-api envs
14:34:20 flaper87: what I mean is, most of the time we survive just fine if we upgrade API nodes and don't expect new features to work before the reg/db has been upgraded
14:34:34 Also, can we do guarantees on a sub-set of the API? Say, only CRUD on image+properties?
14:35:02 Jokke_: I believe that's not a good expectation and surviving that is pure luck
14:35:05 :D
14:35:07 We had a few API changes to metadef APIs but that's admin-only and I'm not sure of the operator expectations on those
14:35:09 but as you said the other way around is the pain and we need to roll our registries at the point we run db migrations
14:35:18 we need to document what order to upgrade things in (if we haven't done so)
14:35:30 mclaren: that was part of my question :D
14:35:50 huh
14:35:53 I don't think we've ever talked about this explicitly and we've just been happy with db migrations
14:36:07 which are great but not enough to make transitions smooth
14:36:19 it's interesting that an IT team would want to upgrade the API before the DB
14:36:36 ok, it seems we need to clarify our story here and, as mclaren said, document it
14:36:42 nikhil_k: indeed
14:36:58 nikhil_k: looking forward to your findings
14:37:03 my 2 cents is that things will probably mostly work ok, but if we really start looking we could find potential issues. Eg I think the way the v2 registry error handling is done assumes the same code is on the API and registry nodes
14:37:22 mclaren: ++
14:37:45 especially given the possibility of DB sync going corrupt w/ restarts, excessively long DB upgrades, etc.
14:37:55 I think it's time for us to look for those issues and improve our story there. It'd be great to at least identify them and work on a long-term plan to fix them
14:37:56 and it will get trickier when you are running an API-only implementation
14:38:19 sounds like a mission for the next release :-)
14:38:31 ++
14:38:36 not saying this has to be all fixed in Mitaka - I mean, that'd be awesome - but definitely something for N
14:38:39 mclaren: out of my MIND!
>.>
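As a side note on the versionedobjects mention above, here is a minimal sketch of the pattern using oslo.versionedobjects. The Image fields below are illustrative, not Glance's actual object model; the point is that a node running the newer schema can downgrade a payload so a not-yet-upgraded node still understands it:

```python
# Illustrative sketch only: not Glance's real object model.
from oslo_utils import versionutils
from oslo_versionedobjects import base, fields


@base.VersionedObjectRegistry.register
class Image(base.VersionedObject):
    # 1.0: initial version
    # 1.1: added virtual_size
    VERSION = '1.1'

    fields = {
        'name': fields.StringField(),
        'status': fields.StringField(),
        'virtual_size': fields.IntegerField(nullable=True),
    }

    def obj_make_compatible(self, primitive, target_version):
        # Called when serializing for an older node: drop fields the
        # target version doesn't know about instead of breaking it.
        super(Image, self).obj_make_compatible(primitive, target_version)
        target = versionutils.convert_version_to_tuple(target_version)
        if target < (1, 1):
            primitive.pop('virtual_size', None)
```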
14:38:40 >.> 14:38:48 oh, there was a question on the upgrades for libraries breaking stuff 14:39:06 Identifying the issues now will help us build a plan for N and O 14:39:16 ++ 14:39:20 ok 14:39:28 moving on, unless there are more questions 14:39:33 NO plans :) 14:39:41 but I will take this item for mitaka 14:39:47 if ok 14:40:16 nikhil_k: absolutely not, you must not try to upgrade ;) 14:40:18 * nikhil_k ties with co-qa liaison responsibilities 14:40:19 I'd like to have time for the reviews list 14:40:19 nikhil_k: we can discuss this further on -glance when you're back 14:40:19 :D 14:40:20 #topic Bug / Review Triage Day  (flaper87) 14:40:42 real quick: I'm thinking of organizing a Bug/Review triage/squash day. I know some folks are still out or not fully back on brain power 14:40:42 nikhil_k: +1 14:40:56 so, I'm not going to propose it for this week, or next week. 14:41:02 What about the week after next week ? 14:41:08 big + 1 on this 14:41:11 sure thing 14:41:23 I'm in 14:41:24 We're getting closer to M-1 and I'd like to have 1 of these days on every milestone 14:41:26 works for me, thanks! 14:41:32 AWESOME! 14:41:34 It would be useful for other to help glance fixin g the updated bugs 14:41:48 sweet 14:41:52 * flaper87 dances 14:42:00 flaper87: So Mon 23rd it is? 14:42:03 I'll send an email out with a proposed day 14:42:05 in the US, we have Thanksgiving week then 14:42:14 Jokke_: yeah, that week 14:42:21 bpoulos: oh uh, you US ppl 14:42:22 ohai bpoulos, you're lurking :) 14:42:23 flaper87: not that week, Monday 14:42:24 >.> 14:42:32 bpoulos: do ppl take the whole week of ? 14:42:38 exactly for the reason bpoulos brought up :P 14:42:41 can't believe he missed thanksgiving week 14:42:47 everyone takes Thursday and Friday off, but some people take off the whole week 14:42:57 it depends on how far away they live from family usually 14:42:59 well, Mon 23rd works me 14:43:11 I'll send 2 dates, one for next week and one for the week after next week 14:43:18 we can vote on the m-l 14:43:21 ++ 14:43:24 bpoulos: thanks for brining that up 14:43:27 bringing, even 14:43:32 flaper87: np 14:43:37 ok, moving on 14:43:44 #topic Reviews / Bugs 14:43:50 glance_store broken ceph gate -> https://review.openstack.org/#/c/243706/ 14:43:54 not sure who put that there 14:44:05 (or any of those) 14:44:06 I did 14:44:08 mfedosin: was that you? 14:44:20 yes 14:44:33 mfedosin: so is glance_store breaking ceph or other way around? 14:44:41 anything specific you want to say? or just raise awareness ? 14:44:43 we have it broken and we can't merge commits to glance_store 14:45:00 btw, I'll do a triage on the reviews today/tomorrow to refresh the dashboard 14:45:01 just raise awareness 14:45:02 why don't we keep these optional 14:45:02 :D 14:45:19 because I have no idea how to fix it 14:45:20 (Sorry that was a actual question) 14:45:25 nikhil_k, there is a lot of installations with ceph+glance 14:45:26 mfedosin: ok, and have you checked is it only master or are all branches same way broken? 14:45:30 AFAIK 14:45:34 and it blocks our trust work for swift driver 14:45:41 yeah, but not all drivers should have to wait on the gate 14:45:43 Multitenant swift driver doesn't work? https://bugs.launchpad.net/swift/+bug/1511025 14:45:43 Launchpad bug 1511025 in OpenStack Object Storage (swift) "Image download with multi tenant true fails" [Undecided,New] 14:45:43 is there a bug in for the broken gate? 
14:45:49 nikhil_k: ++
14:45:59 we need to fix our functional tests for glance_store
14:46:10 kragniz was working on that but he doesn't like us anymore (joke)
14:46:12 I think the core reviewers would be wise enough to notice the failure and stop the ceph patches in this case
14:46:18 functional tests are ok
14:46:42 tempest is broken :)
14:46:49 Multitenant swift driver doesn't work? https://bugs.launchpad.net/swift/+bug/1511025
14:46:49 Launchpad bug 1511025 in OpenStack Object Storage (swift) "Image download with multi tenant true fails" [Undecided,New]
14:46:52 ops
14:47:06 Trusts for Glance are ready btw :) you're welcome to review https://review.openstack.org/#/c/229878/
14:47:06 that's the work for trusts
14:47:09 and yeah - we can't make the multitenant driver work
14:47:12 (please, note the spec hasn't landed)
14:47:29 feel free to review but abstain from approving until the spec lands
14:47:42 and it seems like bunting can't either
14:47:49 looks like a bunch of auth failures there
14:47:51 multitenant broken? :-( we really need all our stores tested in the gate
14:47:54 bunting: Sorry?
14:48:15 mfedosin: Sorry?
14:48:22 bunting: https://bugs.launchpad.net/swift/+bug/1511025
14:48:22 Launchpad bug 1511025 in OpenStack Object Storage (swift) "Image download with multi tenant true fails" [Undecided,New]
14:48:22 bunting, you found a bug
14:48:27 bunting: is it your bug https://bugs.launchpad.net/swift/+bug/1511025 ?
14:48:38 mclaren: ++
14:48:57 I wish someone would take over what kragniz started
14:49:04 Ah right :)
14:49:14 #link http://logs.openstack.org/06/243706/1/check/gate-tempest-dsvm-full-ceph-src-glance_store/1420d1b/console.html#_2015-11-12_12_45_56_220
14:49:41 mclaren: can you fix bug/1511025 ?
14:49:44 flaper87: bunting is the new kragniz ;-)
14:50:05 heh
14:50:07 mfedosin: magically? :-)
14:50:08 I think we should move this to non-voting to avoid last-minute screams for stuff like security patches blocked on an unrelated gate
14:50:15 mclaren: w00000000h0000000000000000000000000
14:50:28 bunting: well, sir. You found yourself a new task
14:50:30 :D
14:50:38 flaper87: Whooooo ):
14:50:39 :)
14:50:46 :D
14:50:53 #topic Open Discussion
14:51:00 mfedosin: bunting and I can hopefully take a look, I definitely want that fixed...
14:51:12 mclaren: +1
14:51:16 anything folks want to bring up or talk about?
14:51:22 o/
14:51:23 mclaren: ++
14:51:28 after that we can start working on trusts for the MT driver
14:51:31 I'd like to bring something up about the image signature verification
14:51:33 mclaren: same here but I don't think I'll have time to make it happen other than providing reviews
14:51:43 bpoulos: shoot
14:51:52 at the summit, we decided to leave the checksum as-is, and then add a second, configurable hash
14:51:58 currently, the signature is of the checksum, which is MD5, which is insecure
14:52:03 and in discussing this feature with Nova, they are completely opposed to ever supporting anything with MD5
14:52:08 they want to sign the image data directly, rather than signing a hash of the image data
14:52:14 would the glance community be opposed to doing the signature verification where the checksum is computed?
14:52:21 this would only occur if the signature properties are present
14:52:40 initially, there was opposition to a second hash being done
14:52:52 but now it seems that as long as the hash is optional, the community is ok with it
14:53:00 based on the discussion about the configurable hash at the summit
14:53:29 bpoulos: that kind of makes sense ...
_but_ how big a performance impact would using more complex algos there cause?
14:53:43 mmh
14:53:46 it's just doing a hash such as SHA-256 or SHA-512
14:53:58 it would be the same as computing a separate configurable hash
14:54:09 gotcha
14:54:14 bpoulos: that would mean we'd need a published protected image property?
14:54:14 and it would only be what the user requested for the signature
14:54:28 we could use the existing signature metadata properties
14:54:30 without issue
14:54:41 we already define a signature hash method
14:54:45 makes sense to me
14:54:49 bpoulos: has this been brought up on the m-l ? I still have some backlog there
14:54:59 where did the nova discussion happen?
14:55:00 flaper87: not yet
14:55:02 no, the discussion has been on the nova spec
14:55:06 let me grab the link
14:55:14 any plans for adding new cores to the team? all other teams are expanding
14:55:24 tekentaro: there are plans
14:55:30 https://review.openstack.org/#/c/188874/
14:55:44 however, the fact that other teams are expanding doesn't mean we should (hope this doesn't come out harsh)
14:55:56 flaper87: ++
14:55:59 flaper87: ok
14:56:20 peer pressure might be difficult to resist
14:56:51 * flaper87 resists peer pressure very well unless there's alcohol involved
14:56:53 * flaper87 stfu
14:56:55 bpoulos: do we need to have a published protected image property?
14:57:05 no, i don't believe so
14:57:21 bpoulos: it makes sense to me as well, fwiw
14:57:25 how do we ensure the consistency/existence of it then?
14:57:38 we check for the optional properties
14:57:40 jokke_: i understood, I just remembered we removed 2-3 members a month before
14:57:42 just like we're doing now
14:57:43 and I trust Daniel's opinions
14:57:44 bpoulos: wonder if we could bring this up on the m-l ?
14:58:00 flaper87: sure, if that's what you'd prefer
14:58:01 bpoulos: yeah, but I think we need to have it protected.
14:58:10 nikhil_k: why?
14:58:12 and now I think we are on a different page
14:58:24 don't mean to interrupt, but anyone interested in the image import refactor, please look at the spec and leave comments: https://review.openstack.org/#/c/232371/ ... so far only flaper87 and mclaren have commented (which is good, they are high-quality comments, but now is the time to get your opinion known)
14:58:42 tekentaro: we really didn't ... those people removed themselves a long time ago. We just did the paperwork
14:58:45 rosmaita: got a few mins to chat after the meeting?
14:58:48 bpoulos: sorry, in my dictionary prot prop != base prop
14:59:15 not saying we should ask nova to accept it but rather discussing how we can do it in glance
14:59:16 and get feedback from nova and other folks
14:59:16 bpoulos: your work impacts several services and it's super important for the community
14:59:16 * flaper87 senses the lag slowing down his messages
14:59:18 bpoulos: so, we will have it optional but by default restricted to the user and documented so that it will be used for signing
14:59:25 mclaren: got a searchlight meeting, how about 11:00 UTC in openstack-glance ?
14:59:45 i'll bring it up on the m-l so we can discuss further there
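Here is a minimal sketch of the flow bpoulos describes: keep the legacy MD5 checksum as-is, and in the same pass over the image data compute a second, configurable secure hash, which is what the signature would then be verified against (addressing Nova's objection to MD5). The function name and the default hash method below are illustrative, not the actual implementation:

```python
import hashlib


def checksum_and_secure_digest(chunks, hash_method="sha256"):
    """Single pass over image chunks: legacy checksum plus secure digest.

    hash_method stands in for whatever the user requested via the
    signature metadata properties (e.g. sha256, sha512).
    """
    md5 = hashlib.md5()                # legacy checksum, left untouched
    secure = hashlib.new(hash_method)  # second, configurable secure hash
    for chunk in chunks:
        md5.update(chunk)
        secure.update(chunk)
    # The signature is then checked against secure.digest(), not the MD5.
    return md5.hexdigest(), secure.digest()
```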
14:59:59 ok, we're running out of time
14:59:59 rosmaita: ++
14:59:59 rosmaita: ++
15:00:01 rosmaita: 11:00?
15:00:05 rosmaita: ++
15:00:05 one more time
15:00:08 rosmaita: ++
15:00:08 :D
15:00:13 ok, out of time
15:00:16 thanks ppl
15:00:21 thanks!
15:00:22 thanks
15:00:23 rosmaita: I don't want to leave comments and disappear for a few days!
15:00:31 that is quite likely
15:00:41 thanks folks
15:00:42 thanks
15:00:49 nikhil_k: that's ok, i will ignore your comments if i don't like them :)
15:00:51 thank you!
15:00:59 thanks!
15:01:09 rosmaita: sure! I have a treat for you in that case :)
15:01:42 * nikhil_k done
15:02:09 #chairs
15:02:11 #chair
15:02:20 not sure if flaper87 dropped off
15:02:51 wth, let's try
15:02:52 #endmeeting