20:00:46 #startmeeting trove
20:00:47 Meeting started Wed Oct 9 20:00:46 2013 UTC and is due to finish in 60 minutes. The chair is hub_cap. Information about MeetBot at http://wiki.debian.org/MeetBot.
20:00:47 Oh, I think it's time
20:00:48 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
20:00:50 The meeting name has been set to 'trove'
20:00:59 o^/
20:01:00 #link https://wiki.openstack.org/wiki/Meetings/TroveMeeting
20:01:02 \\o//
20:01:07 o/
20:01:07 present
20:01:08 o/
20:01:08 Hi all
20:01:09 here
20:01:13 o/
20:01:14 Hello all
20:01:18 howdy
20:01:19 \\\o///
20:01:21 oi
20:01:26 o/
20:01:44 #link http://eavesdrop.openstack.org/meetings/trove/2013/trove.2013-10-02-20.00.html
20:01:46 o/
20:01:53 #topic update to action items
20:01:58 god SlickNik this is embarrassing
20:02:10 lol
20:02:14 have u done any work w/ LP for group stuff?
20:02:24 ugh, I keep forgetting about this. I'm gonna do it right now. :)
20:02:39 #action SlickNik to check with other teams to set groups permissions correctly on LaunchPad
20:02:46 * hub_cap is removed
20:02:47 ;)
20:02:51 very clever.
20:02:55 ;)
20:03:00 now its your fault if its not done
20:03:03 hahahahah victory
20:03:11 heh
20:03:13 #topic escaping dots in urls
20:03:13 does users could be able to retrigg their builds ?
20:03:16 datsun180b: go
20:03:23 okay
20:03:32 dmakogon_: it has to do with that and modifying bugs / blueprints
20:03:38 so there's a problem when issuing requests that end in a field that is dotted
20:03:49 user calls except create for example
20:04:30 so in case a username/hostname contains a dot, the route mapper has a habit of eating the final extension, thinking it's a file extension
20:04:48 can u give an example?
20:04:56 doesn't user should encode url himself?
20:05:01 this can be a problem when issuing a call like GET /users/foo.bar, because the response ends up being a 404: 'user foo DNE'
20:05:23 o/
20:05:24 so to get around this we can escape the dots as %2e: "GET /users/foo%2ebar"
20:05:37 ok with that
20:05:45 but to issue this call manually, it must be encoded a SECOND time: GET /users/foo%252ebar
20:06:12 datsun180b: is it python client issue?
20:06:23 This is an issue with the way we use route mapper
20:06:34 well the issue is _cuz_ we use route.mapper
20:06:46 This has been a pain point for Rax users, even ones using curl or similar clients to issue requests
20:07:09 how do we able to fix that ?
20:07:22 It should be done on client side http://www.w3schools.com/tags/ref_urlencode.asp
20:07:32 One way would be to inspect the request object to look for the 'missing' extension
20:07:53 but that would require a whitelist of expected suffixes and limit possible usernames
20:08:07 another approach would be to use unique ids for users that do not contain the username or host
20:08:18 ive ruled on this in the past fwiw, and i think it should not be something we whitelist. its up to the user to deal with
20:08:24 datsun180b: who uses curl for our api?
20:08:28 that sounds painful
20:08:34 lol kevinconway no rules against it
20:08:43 i write all my webapps in bash
20:08:53 hah
20:08:54 bash on bails?
20:08:56 nice
20:08:59 * KennethWilke dies
20:09:02 kevinconway: Curl is the preferred choice for people who want to show examples proving they actually wrote a REST API and not something that requires a special client. :)
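The workaround discussed above (write dots as %2e, then percent-encode a second time so the literal %2e is what survives the server's single decode pass) can be captured in a small helper. This is a minimal sketch for illustration only, not part of python-troveclient; the example path is the abbreviated /users/... form used in the discussion.

```python
def encode_dotted(name):
    """Escape dots so the Routes mapper doesn't treat ".bar" as a format extension.

    Each dot is first written as %2e, then the percent sign itself is encoded so
    that the literal "%2e" is what remains after the WSGI stack's URL-decode pass.
    """
    return name.replace(".", "%2e").replace("%", "%25")


# GET /users/foo.bar 404s ("user foo DNE"); the double-encoded form reaches the handler intact.
print(encode_dotted("foo.bar"))   # -> foo%252ebar
```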
20:09:03 but there needs to be a sophisticated way to handle it for customers
20:09:06 well i'm having a hard time speaking for all of our users
20:09:27 so it is your job to encode urls properly, right?
20:09:33 ++
20:09:40 i'll reiterate another approach: to not refer to users by their name or host in request URL
20:09:47 lets just say its the job of the consumer. i rule on it.
20:09:50 completely sidestep the issue
20:10:00 datsun180b: i agree w/ that somewhat, but the mysql api allows for it
20:10:04 and how else can we delete users
20:10:06 :/
20:10:14 delete users/user-uuid
20:10:14 if they are their own resource
20:10:15 currently it is not clear how customers with a . name can easily handle it
20:10:23 that would be a v2 change i'm guessing
20:11:02 hub_cap: final word?
20:11:06 NehaV: we should document that
20:11:11 it is ugly, we don't ask google to add extra logic to make incorrect requests working
20:11:14 so, make a conclusion
20:11:20 but we are not going to sacrifice functionality of the app for this
20:11:21 datsun180b: can i take my 50/50
20:11:43 obvi dmakogon wants us to move on
20:11:57 ;)
20:11:58 it is standard, what we are talking about
20:12:04 ++
20:12:05 no, Ed said - "final word" )))
20:12:10 isviridov: it's not standard to double url encode your input
20:12:12 documenting is fine but we are not giving a good experience. this is a common use case to have . in username
20:12:24 especially in ipv4 addresses
20:12:33 NehaV: the other solutions limit the app behavior
20:12:49 you have to blacklist particular users that are allowed in mysql, artificially limiting behavior
20:13:07 so we either accept that you have to encode, or limit our api
20:13:36 why not just patch routes?
20:13:39 kevinconway: yes, that is why no additional logic of recognition wrong encoding on server side
20:13:47 does it give us the ability to get the extension?
20:13:52 vipul: i did a while ago, for escaping dots
20:13:54 if so.. we could write some hack that concats
20:14:10 what about url fixing on reverse proxy side if it exists?
20:14:11 we don't have any use case in our api for extensions anyway
20:14:47 i think we need a new daemon to translate user names to the right format
20:14:47 vipul: https://github.com/bbangert/routes/commit/1c5da13b9fc227323f7023327c8c97e9f446353f
20:14:50 for example if i wanted to name a user foobar.json it would be ambiguous
20:15:03 we allow .json in the url? is it meaningful?
20:15:08 yes we do
20:15:11 vipul: apparently it is
20:15:14 its a format string, try it out
20:15:20 that's what the accept is for
20:15:22 hub_cap - isn't there a better way to fix it rather than just documenting and asking to add escape
20:15:25 .xml and .json
20:15:31 it can override the accept headers
20:15:50 NehaV: not that doesnt break the overall functionality of the app for _anything_ that requires dots in urls
20:16:13 hub_cap: +1
20:16:17 well i'll note that we've exhausted 15 minutes of our weekly meeting about this. this discussion may need to continue somewhere or somewhen else with respect to the other issues we have to discuss
20:16:28 to the mailing list!
20:16:42 that's actually a good idea
20:16:44 ML or die
20:16:55 ML or die
20:16:58 +1
20:17:02 ML or pie
20:17:06 ill take pie
20:17:08 moving on
20:17:10 but i'm not in control of this meeting, i just technically have the floor. hub_cap, up to you to move the topic if you like
20:17:27 #topic Provisioning post ACTIVE
20:17:55 vipul: this is u ya?
20:17:57 This is me.. comment on redthrux's patch for moving DNS provisioning
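Tying off the URL-escaping thread above: since .json/.xml suffixes act as format overrides, a username like "foobar.json" really is ambiguous in the path. A client can avoid the suffix entirely by asking for JSON via the Accept header and sending the double-encoded name. A minimal sketch using the requests library; the endpoint, token, and instance path are hypothetical, and encode_dotted refers to the earlier sketch.

```python
import requests

TROVE_ENDPOINT = "http://trove.example.com:8779/v1.0/TENANT_ID"   # hypothetical
TOKEN = "keystone-token-goes-here"                                 # hypothetical


def get_user(instance_id, username):
    """Fetch a database user whose name may contain dots."""
    # Request JSON via the Accept header instead of a ".json" suffix, so a real
    # username ending in ".json" can never be mistaken for a format extension.
    url = "%s/instances/%s/users/%s" % (
        TROVE_ENDPOINT, instance_id, encode_dotted(username))
    resp = requests.get(url, headers={"Accept": "application/json",
                                      "X-Auth-Token": TOKEN})
    resp.raise_for_status()
    return resp.json()
```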
20:18:05 hey put your name on the things u add
20:18:06 things like DNS.. floating IPs.. security groups
20:18:18 aren't needed unless the instance is usable
20:18:23 which means ACTIVE
20:18:29 so why not provision them after that
20:18:38 about that, we cannot add any resource creation after poll_untill cuz it would break or limit heat workflow
20:18:47 vipul: I think the issue is they actually are needed
20:19:01 grapex: they are needed even if the instance never goes active?
20:19:02 cuz heat would cover every resources ever imagined
20:19:10 If there is no DNS, the instance can't actually be used.
20:19:19 i believe firmly that if the resources we are supposed to create do not get created, its FAILED
20:19:25 grapex: why is that ?
20:19:33 Sure.. which is why if DNS fails.. then we don't consider it active
20:19:34 if we need a sec grp or dns, and the instance comes online but dns failed, mark it as FAILED
20:19:38 regardless of whether mysql came up
20:19:40 correct vipul
20:19:43 do we have any cases when user fixes provisioning parts? Like re-adding floating IP?
20:19:45 FAIL
20:19:47 vipul: dmakogon: Do you mean, why provision that stuff if the instance itself won't provision? Why not put it at the end?
20:19:51 isviridov: no point in that
20:19:55 just delete/recreate
20:20:02 I agree we should put it at the end- maybe
20:20:12 I just think the status should only be ACTIVE if *all* resources have provisioned
20:20:17 ++
20:20:21 sure.. agreed
20:20:23 so, let us think about it as atomic thing
20:20:27 vipul: So maybe we agree on the point that the resource prov order should be changed
20:20:52 Currently, the status of ACTIVE comes back when the server and volume are ACTIVE and the guest status is also ACTIVE
20:20:59 server -> DB -> sec.gr -> fl.ip -> dns
20:21:13 well
20:21:20 server+db+sec+ip+dns
20:21:23 If we want to move the provisioning of other stuff until after the server provisions, then we'd need to essentially set the active status in the trove database at the end of the task manager provisioning routine
20:21:24 = ACTIVE
20:21:38 this all would work with nova background
20:21:39 I agree on the ACTIVE part (i.e. an instance should go ACTIVE iff none of the component parts fail provisioning), still thinking about the order part.
20:21:49 so the status is async
20:21:56 its sent back from the guest
20:22:12 order is defined by dependencies, but result is single
20:22:13 so i dont think we need to necessarily wait for that to prov other resources
20:22:16 we can always clean up
20:22:19 hub_cap: Right now the active status checks the server and volume STATUS, plus the service (guest) status, and if they're all ACTIVE it shows the user ACTIVE
20:22:22 Maybe what we do is
20:22:41 in the taskmanager, we wait until everything provisions, and then we save something to the trove DB as a marker saying we believe all resources provisioned
20:22:57 then the ACTIVE status is whether the Trove taskmanager thinks it finished plus if the service status is ACTIVE
20:23:03 hub_cap: vipul: grapex: how this would work with heat provisioning
20:23:03 couldn't we use task state for that
20:23:05 ??
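The rule being converged on above is a single aggregation point: report ACTIVE only when the nova server, the volume, the guest service, and the taskmanager's "provisioning finished" marker all agree, and let any failed sub-resource (DNS, security group, floating IP) force FAILED. A minimal sketch of that rule, using hypothetical status constants and arguments rather than trove's actual model code:

```python
def aggregate_status(server_status, volume_status, guest_status,
                     task_complete, failed_resources):
    """Collapse per-resource states into the single status shown to the user.

    failed_resources is a set such as {"dns", "security_group"}; task_complete is
    the taskmanager's "everything I was asked to provision succeeded" marker.
    """
    if failed_resources:
        return "FAILED"      # e.g. DNS failed even though mysql came up
    if not task_complete:
        return "BUILD"       # taskmanager still provisioning sec groups / fips / dns
    if (server_status, volume_status, guest_status) == ("ACTIVE",) * 3:
        return "ACTIVE"      # every component part provisioned
    return "BUILD"


# Example: guest is up but DNS creation failed -> the instance is reported FAILED.
print(aggregate_status("ACTIVE", "ACTIVE", "ACTIVE", True, {"dns"}))
```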
20:23:13 vipul: Maybe
20:23:28 so maybe we actually make use of that when aggregating the end user status
20:23:30 dmakogon: Not sure, but I think if we changed that it would be closer to what we need for Heat
20:23:47 taskmanager goes to PROVISION COMPLETE or whatever -> implies ACTIVE
20:24:06 vipul: Or maybe it should be set to NONE-
20:24:17 that actually would be more consistent with the other action code
20:24:26 hub_cap: vipul: grapex: when we are using heat all resources already created with stack, so we cannot manipulate and control creation of any instance related resources !
20:24:27 vipul: But I agree with the idea
20:24:37 +1 dmakogon
20:24:43 lets not change it
20:24:48 +1
20:24:49 dmakogon: Ideally heat would be able to provision all this for us, so we wouldn't have to piecemeal provision an instance.
20:24:59 hub_cap: totally agree with you
20:25:03 can't you do things with HEAT that wait for conditions to be met
20:25:08 dmakogon: The idea isn't that we're dramatically changing it, we're just moving stuff around a bit
20:25:43 Btw, I only think this is necessary if we care about the order of provisioning
20:25:44 grapex: it would break whole workflow provisioning
20:25:50 I personally don't know if its worth doing now
20:26:08 i'm suggesting to leave it as it is
20:26:17 vipul: WaitConditions
20:26:22 For now, we could let it go to ACTIVE.. and if DNS provisioning fails, mark as FAILED
20:26:27 dmakogon: I honestly agree. Vipul, what was a big motivation to change the order?
20:26:30 lets solve redthrux's meeting
20:26:32 LOL
20:26:32 I don't think the whole status reporting needs to be solved
20:26:32 issue
20:26:42 vipul: I don't think that would work for DNS
20:26:48 i dont think the whole thing needs to be redone
20:26:51 So redthrux had to miss the meeting due to a doctor appointment
20:26:55 I talked to him a bit
20:26:59 Here is his real problem
20:27:01 lets make sure that we mark a status to failed for dns if it fails
20:27:09 When DNS fails, we set it to a task status that means that DNS fails-
20:27:15 hub_cap: vipul: grapex: forget about DNS, it also would be a part of heat resources
20:27:31 the problem is though, in this state a Delete operation is not allowed. That's the bug we need to fix- it should be possible to delete such instances.
20:27:38 hub_cap: vipul: grapex: dns is not critical resource
20:27:56 heat = magic wand
20:27:58 dmakogon: It's only critical if we want users to be able to use it to log into their databases.
20:28:11 lol @ vipul
20:28:12 vipul: Lol
20:28:13 dmakogon: thats not correct. it is critical
20:28:15 if you need dns
20:28:18 hub_cap: vipul: grapex: mgmt allows to re-create dns record or we could do that by hands
20:28:20 and its part of your workflow
20:28:22 no
20:28:24 no no no
20:28:30 if dns fails your instance fails
20:28:31 period
20:28:37 delete/recreate
20:28:43 and fix your crappy dns system ;)
20:28:44 How about we make an action item to allow DELETE to work on instances in the BUILDING_ERROR_DNS state?
20:28:54 So in the current patch, we tell taskmanager to provision the instance, and immediately go create DNS
20:28:56 grapex: let us fix bug with deleting, instead of introducing new ones
20:29:02 dmakogon: I don't think any of trove's customers need to or want to involve themselves with managing DNS :)
20:29:02 grapex +1
20:29:02 ++
20:29:26 grapex: agreed
20:29:38 SlickNik: maybe so, so i'm ok with new DNS status
20:29:48 like grapex said
20:29:59 could we talk about security groups review ?
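Two concrete pieces fall out of this exchange: a failed DNS creation should flip the instance into a DNS-specific error task state (so it reads FAILED, not half-built), and DELETE has to be accepted while the instance sits in that state. A minimal sketch of both, with hypothetical names for the task-state set, the DNS driver, and the instance helpers; trove's real taskmanager and model code are organized differently.

```python
BUILDING_ERROR_DNS = "BUILDING_ERROR_DNS"
DELETABLE_ERROR_TASKS = {BUILDING_ERROR_DNS, "BUILDING_ERROR_SEC_GROUP"}  # hypothetical set


def provision_dns(instance, dns_driver):
    """Create the DNS record once the server is up; on failure, fail the instance."""
    try:
        dns_driver.create_record(instance.hostname, instance.ip)
    except Exception:
        # Don't leave a half-built instance looking healthy: record the DNS-specific
        # error task so the aggregated status (see earlier sketch) reports FAILED.
        instance.update_task(BUILDING_ERROR_DNS)
        raise


def can_delete(instance):
    """DELETE must work on instances stuck in a build-error state, not just healthy ones."""
    return (instance.task in DELETABLE_ERROR_TASKS
            or instance.status in ("ACTIVE", "FAILED", "SHUTDOWN"))
```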
20:30:00 #action Fix bug with DELETE to allow instances in BUILDING_ERROR_DNS state to be deleted.
20:30:11 that works.. I still think we are unnecessarily provisioning things unless we wait for Guest to come active
20:30:29 almost everyone already seen it, so i want to make a conclusion
20:30:30 vipul: I agree
20:30:43 vipul: it just seems like no one wants to fix it right now if Heat is coming
20:30:51 ok moving on?
20:30:58 yes pls
20:31:00 vipul: I think it would be possible in a minimal way, but it sounds like there's not much support... maybe we can talk later.
20:31:02 yep.. revisit after HEAT
20:31:14 sounds good
20:31:21 #topic Provision database in specific tenant
20:31:26 #link https://blueprints.launchpad.net/trove/+spec/dedicated-tenant-db-provisioning
20:31:29 isviridov: yer up
20:31:49 so, idea is to make Trove real DBaaS out of the box
20:31:50 can u explain what this means?
20:32:29 Now we are creating db in target tenant quota, but already have own quota management
20:32:48 why not create all instances in f.e. trove tenant?
20:33:08 it's possible.. just need a custom remote.py
20:33:10 im confused, so youre saying submit resources on behalf of a single user?
20:33:14 isviridov: What is the difference between "trove tenant" and "target tenant quota"?
20:33:25 one super-tenant?
20:33:29 that holds all instances
20:33:30 so nova resources are not shown as the users instances?
20:33:37 so, user doesn't care about his quota and doesn't see instances what belongs to trove actually
20:33:39 like a shadow user so to speak
20:34:02 sounds like it
20:34:06 isviridov: because as a provider, if your deployment currently assigns a tenant per user, you now have no way of restricting resources on a developer/project basis?
20:34:16 it means that trove should own personal tenant and which you rule all stuff
20:34:18 hi amcrn!
20:34:21 hi :)
20:34:31 thats not how openstack works though
20:34:43 what problem are you trying to solve?
20:34:47 right, so I'm asking what's the problem he's trying to solve
20:34:53 jinx :P
20:34:57 You'd have the extra overhead of doing _all_ of the quota management yourself.
20:35:02 if you need managed resources in nova
20:35:10 then we should fix nova to allow it
20:35:39 ive spoken w/ the nova team about this (a long time ago)
20:35:49 hub_cap: it is an idea to hide all trove resource management from user a
20:35:52 to provision resources that maybe a user cant see when they list nova instances, or at least cant use it
20:36:03 like the problem is a user sees nova resources
20:36:07 and can, for instance, change ram
20:36:12 yea if you're running Trove against a public nova endpoint, i see the issue..
20:36:23 yes this is not a trove problem
20:36:23 you have Trove instances littered in your acct along with compute instances
20:36:34 its a resource viewing / accessing issue in nova
20:36:49 lets talk to nova to see if they would still allow "shadow"/"managed" users
20:37:00 1 sec
20:37:22 btw isviridov.. if you _really_ want to do this.. just put in a different remote.py :)
20:38:00 look you have got me vipul
20:38:17 just create all instances in trove tenant, not users' one
20:38:39 and handle all quota inside that tenant
20:39:10 so, user uses it as pure dbaas
20:39:24 isviridov: its pure dbaas w/ multiple tenants too
20:39:25 so, all auth stuff could happen under the hood of trove api
20:39:26 for the record
20:39:52 upstream trove supports deployment that creates instances in user's tenant...
20:40:04 you could always change that behavior.. which is the reason why we make it pluggable
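The "just put in a different remote.py" suggestion refers to trove's pluggable client factories: a deployer who wants every nova resource created in one service tenant can swap the default factory (which reuses the caller's token, so resources land in the user's own tenant) for one that authenticates as the service tenant. A minimal sketch assuming the 2013-era novaclient.v1_1 Client interface; the tenant name, credentials, and URLs are hypothetical, not upstream configuration.

```python
# Sketch of a deployment-specific remote.py (names and credentials are placeholders).
from novaclient.v1_1 import client as nova_client

SERVICE_TENANT = "trove"            # the single "shadow" tenant that owns all instances
SERVICE_USER = "trove-service"
SERVICE_PASSWORD = "secret"
AUTH_URL = "http://keystone.example.com:5000/v2.0"


def create_nova_client(context):
    """Return a nova client bound to the trove service tenant instead of the caller's.

    Because the user's own tenant and quota are no longer involved, trove then has
    to enforce per-user quotas itself, as pointed out in the discussion above.
    """
    return nova_client.Client(SERVICE_USER, SERVICE_PASSWORD,
                              SERVICE_TENANT, AUTH_URL)
```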
20:40:07 fwiw, this should not be fixed in trove... there is no need to have a single global tenant i think
20:40:16 not in upstream
20:40:19 vipul: and not contribute it upstream
20:40:22 vipul: ++
20:40:55 managed instances will be the best approach in nova
20:40:56 hub_cap: vipul: is it possible to make it configurable
20:40:57 ?
20:41:04 i dont think its necessary..
20:41:10 all of openstack behaves one way
20:41:15 we should stay standard
20:41:24 anyhow we don't give access to db instances for user
20:41:35 why he should see and manage all that resources?
20:41:46 again this goes back to "what is the problem you are trying to solve"
20:41:55 if the problem is users can see your nova resources
20:42:04 dmakogon: it is today
20:42:06 then the problem is that we need to fix it in nova
20:42:32 vipul: what ?
20:43:01 dmakogon: It is already configurable.. how you decide to spin up nova resources
20:43:01 and other projects as well.. since the same applies for cinder or designate for example
20:43:08 ++
20:43:26 also there's a caveat with using single tenant when you want to create tenant networks
20:43:42 say for private tenant network for a cluster
20:43:42 vipul: i mean to make it resources per user tenant and resources per trove tenant
20:43:55 rnirmal: ++
20:44:09 it will be a problem
20:44:22 dmakogon: Yes that's all driven by whether you pass down the auth-token to Nova or obtain a new one for the shadow tenant
20:44:35 we should move on
20:44:37 there are more things
20:44:38 yes
20:44:43 let us
20:44:51 vipul: oh, i got it
20:44:53 yup
20:45:31 #topic Provisioning several db processes on the same cluster
20:45:38 #link https://blueprints.launchpad.net/trove/+spec/shared-cluster-db-provisioning
20:46:13 ok im not a fan of this one either
20:46:16 :)
20:46:16 weve talked about it in the past
20:46:31 It is about cloud utilization. The idea to host several database daemons on the same cluster
20:46:32 if you have too many spare cycles on your nodes, your nodes are too big
20:46:35 )
20:46:44 shrink the nodes if you have extra utilization
20:46:47 and prov a new cluster
20:46:48 hub_cap: +10000000
20:47:02 upgrades, guest status, and many other things would not be easy
20:47:08 Not extra, but idle
20:47:12 and create more vms where necessary.. or containers..they are cheap
20:47:18 ++++++++++ rnirmal
20:47:19 I think this is a bad idea in general.
20:47:25 so you want to have one instance run multiple guests and multiple db engines?
20:47:26 cloud is cheap
20:47:35 kevinconway: thats what he was proposing
20:47:39 i could see a situation once baremetal is supported, that you'd want multiple processes (say Redis) on a single machine to avoid wasting hardware, but other than that, i'm with hub_cap/rnirmal/etc.
20:48:04 what is active? active_cassandra, active_mongo, some status thereof?
20:48:14 my mongo is up but my cassandra is down
20:48:14 what does that mean
20:48:19 or 2 clusters
20:48:29 yeah we've definitely talked alot about this in the past and the general consensus has always been one container does one thing
20:48:43 does one user access all the engines?
20:48:44 amcrn: even in that case wouldn't it be better to use an lxc driver to partition the instances and resources?
20:49:00 kevinconway: different
20:49:05 juice: fair enough
20:49:12 isviridov: I feel like for flexibility, this *should* have been built into Trove. But as you can see no one likes it. :)
20:49:28 amcrn: assuming lxc driver works with nova :P
20:49:31 juice / amcrn: yup, even if baremetal were supported, I'd push for having these agents run in separate containers on the bare metal node so some isolation exists.
20:49:34 * hub_cap spills some coffee over grapex's keyboard
20:49:44 looks it will be in future or substituted with containers like dokit
20:49:55 thanks, let's move on
20:50:10 #topic Auto-recovery of node in replica/cluster
20:50:18 #link https://blueprints.launchpad.net/trove/+spec/auto-recovery-of-cluster-replica
20:50:21 now i like this!
20:50:29 Finally ^)
20:50:34 hahah
20:51:06 what types of metrics would we push to ceilometer
20:51:14 yes, as we discussed earlier, autorecovery/failover is one of the goals for IceHouse
20:51:31 and how does ceilometer notify Trove to do something
20:51:41 vipul: specific database related metrics
20:51:46 vipul: it depends on the type of cluster and specific for db
20:51:52 Bender: C'mon, it's just like making love. Y'know, left, down, rotate sixty-two degrees, engage rotors...
20:52:08 i agree w cp16net
20:52:13 cp16net: nice one)))
20:52:13 lol
20:52:14 vipul: ceilometer has Alerts mechanism which is used in HEAT for autoscaling
20:52:16 sounds like someone else needs coffee on their keyboard
20:52:24 * hub_cap throws a random number of Ninja Turtles at cp16net
20:52:36 D20
20:52:47 ok srsly though, i think that we can discuss the approach offline. i do like the idea though
20:53:01 hub_cap +1
20:53:01 ML is nigh
20:53:05 how about doing this for a single instance?
20:53:09 yes ML++
20:53:13 why does it need to wait for clusters
20:53:34 lets table this
20:53:38 i really want to discuss the next one
20:53:38 vipul: as first step - maybe, what about down-time and data consistency ?
20:53:44 I like the idea as well. But I think a prereq for it would be to have some sort of cluster / replica implemented no?
20:53:58 it's down anyway.. if it needs to be recovered
20:54:10 from backup f.e.
20:54:19 recovery = spinning new instance
20:54:21 btw I would like to open the trove channel for replication/cluster api discussion after this meeting and tomorrow morning when everyone gets in
20:54:28 the criteria should be defined for single instance
20:54:50 imsplitbit: would love to join
20:55:02 vipul: I like the idea but, I'm not sure what would be the recovery in case a single instance is down?
20:55:14 restore from last known backup
20:55:18 which is all you can do
20:55:22 yes
20:55:25 SlickNik : provisioning new one
20:55:46 i'm just saying there is opportunity to prove this with a single instance.. before getting crazy with clusters
20:55:46 heyo
20:55:51 #table it
20:55:52 to substitute
20:55:55 #move on
20:55:57 fine!
20:55:59 :P
20:56:03 #topic Lunchpad meetings
20:56:06 I like it, but table for later.
20:56:09 crap i menat to fix that typo
20:56:13 *meant
20:56:16 The last from my side
20:56:18 heh
20:56:31 #link https://launchpad.net/sprints
20:56:37 did the openstack bot die?
20:56:41 good idea
20:56:42 We are in different timezone, let us arrange the topic meetings
20:56:56 oh noes
20:56:58 looks like it
20:57:00 no bot?!
20:57:01 scheduling is always good way
20:57:08 hub_cap: how it can help with scheduling?
20:57:21 what is this link for
20:57:30 are we having a gathering?
20:57:31 im not sure. im curious to know if we need it
20:57:42 vipul: for creating public visible meeting
20:57:59 so like this meeting?
20:58:01 hub_cap: yes, we could plan discussions
20:58:03 isviridov: other than this one?
20:58:14 What's wrong with #openstack-meeting-alt for that?
20:58:16 "Launchpad can help you organize your developer sprints, summits and gatherings. Register the meeting here, then you can invite people to nominate blueprints for discussion at the event. The meeting drivers control the agenda, but everyone can see what's proposed and what's been accepted."
20:58:18 kevinconway: yes, but by the topic
20:58:19 monday = cluster API , tue = failover
20:58:26 I imagine it's a replacement for the wiki you're using
20:58:32 oh no… meetings every day?
20:58:36 (that holds the topics, last weeks notes, etc.)
20:58:46 amcrn: forget about description, it is just a tool
20:58:50 how is this different from just being in #openstack-trove ?
20:58:59 and saying "lets discuss tomorrow at 10am"
20:59:00 here's an example of a sprint: https://launchpad.net/sprints/uds-1311
20:59:02 kevinconway: if you want to be involved everywhere
20:59:13 hub_cap: we can come together and discuss specific topic
20:59:20 i hate launchpad..
20:59:29 this is a ML topic
20:59:29 so the less we do the better IMHO :P
20:59:43 isviridov: how do we not do that _in_ the room?
20:59:45 does it allow for async?
20:59:53 vipul made good conclusion for this topic: "I have LP"
21:00:08 if it requires us to be somewhere, virtually, at the same time
21:00:09 s/have/hate
21:00:16 i dont see what it buys us over just being in the channel
21:00:21 So like amcrn said, it would only be a replacement for the meeting agenda page on the wiki, if anything…
21:00:22 and for async, we have the ML
21:00:29 ahh, yep, openstack (the meetbot) is missing his ops hat. fixing
21:00:34 <3 fungi
21:00:37 ill endmeeting after
21:00:40 thanks fungi!
21:00:53 #endmeeting
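Circling back to the auto-recovery blueprint discussed at 20:50-20:55: the rough consensus was that ceilometer alarms would notify trove, and that for a single instance "recovery" simply means provisioning a replacement from the last known backup. A minimal sketch of what that trove-side alarm handler could look like; the payload shape and every helper call here are assumptions for illustration, not an existing trove or ceilometer API.

```python
def handle_ceilometer_alarm(payload, trove_api):
    """React to a ceilometer alarm webhook by recovering the affected instance.

    Assumed payload shape: {"alarm": "db-down", "instance_id": "..."}. trove_api is
    a stand-in for whatever internal API the taskmanager would expose for this.
    """
    instance_id = payload["instance_id"]

    # Single-instance case first (prove it here before getting into clusters):
    # the old instance is already down, so spin up a replacement from the most
    # recent backup rather than trying to repair it in place.
    backup = trove_api.get_latest_backup(instance_id)
    replacement = trove_api.create_instance_from_backup(backup)
    trove_api.repoint_dns(instance_id, replacement)   # reuse the original hostname
    trove_api.delete_instance(instance_id)
    return replacement
```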