21:03:01 #startmeeting scientific-wg
21:03:02 Meeting started Tue May 17 21:03:01 2016 UTC and is due to finish in 60 minutes. The chair is b1airo. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:03:03 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:03:05 b1airo: hello
21:03:05 The meeting name has been set to 'scientific_wg'
21:03:13 Greetings!
21:03:22 o/
21:03:24 Hi
21:03:31 ah g'day there you are - the list of lurkers here is ridiculous
21:03:39 hello
21:03:52 b1airo: if you use the command #chair oneswig the Stig can also be chair
21:03:53 b1airo: you are chair since you started the meeting
21:04:04 thanks anteaya
21:04:12 welcome
21:04:13 #chair oneswig
21:04:14 Current chairs: b1airo oneswig
21:04:19 ooh, the power
21:04:33 don't go all voldemort on us
21:04:37 now it will recognize commands from either of you
21:04:41 o/
21:05:05 #topic roll-call
21:05:10 o/
21:05:11 * dfflanders yawns
21:05:17 * dfflanders waking up
21:05:18 \o/
21:05:30 \o/
21:05:34 any IRC first timers here for scientific-wg?
21:05:44 kinda
21:05:45 I am from sanger
21:05:53 not new to IRC but new to the OpenStack IRC meetings
21:05:55 certainly not an old hand at this
21:05:58 ditto here
21:06:03 if you can shout out hello that'd be good - so we get an idea of who is actually participating
21:06:12 rbudden: me too
21:06:22 hello
21:06:24 g'day!
21:06:27 Hello
21:06:48 hello
21:06:49 Ready?
21:06:50 b1airo: yeah, I usually lurk in openstack-ironic
21:07:26 #topic Newton cycle activity planning and brainstorming
21:07:41 well this is my first time on IRC in months, was using it for our cloud here in australia but then moved over to slack so that the managery folks could get engaged easier
21:08:26 first question / discussion point
21:08:39 very common use case
21:08:45 We have two meeting times, which is going to make it tricky to reach a split-brained consensus on things
21:09:05 But we'll converge I expect
21:09:18 is anyone aware of any specs we should be cognisant of?
21:09:18 folks who want to agree find a way
21:09:43 well this group keeps mentioning the scheduler in nova
21:09:48 is anyone following that work?
21:09:56 or attending the scheduler meetings?
21:10:17 i have to admit i have not yet done my usual post-summit spec perusal
21:10:17 Not me but I'm interested in the scheduler's revised treatment of Ironic
21:10:42 not following yet, but had some discussions at the summit about it
21:10:57 I'm also interested in the work on Ironic serial console support, while we are there
21:11:08 okay well whoever is interested in tracking scheduler decisions make themselves known so I can help you find meetings and specs
21:11:13 A long-standing spec seems to be finally getting traction
21:11:40 the short story I was told is that they are aware of a slew of issues and a major rewrite is under way
21:11:58 but i'd defer to their meetings for the full truth ;)
21:12:04 for instance here is the scheduler team meeting info
21:12:16 #link scheduler meeting http://eavesdrop.openstack.org/#Nova_Scheduler_Team_Meeting
21:12:32 anteaya: thanks for that
21:12:40 thanks
21:12:50 welcome
21:13:00 i think where we can add value here as a working group is in people discussing issues with the specs in this forum and then we can take that to the project on the mailing list or what have you. the problem i've found with this in the past is that, whilst you can comment on specs easily enough, you're just one little voice that (probably) no one on the dev team knows.
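
For the first-timers above: the #-prefixed commands in this log are MeetBot directives issued by the chairs. A minimal sketch of the ones used in this meeting (see the MeetBot link recorded at 21:03:02 for the full reference; the URL and action text in the examples are placeholders):

    #startmeeting scientific-wg       (opens the meeting; the issuer becomes chair)
    #chair oneswig                    (adds a second chair who can also issue commands)
    #topic Roll Call                  (sets the current topic, reflected in the channel topic)
    #link http://example.org/agenda   (records a URL prominently in the minutes)
    #action b1airo to update wiki     (records an action item against a person)
    #agreed we will use Trello        (records a decision of the group)
    #undo                             (removes the most recently recorded item)
    #endmeeting                       (closes the meeting and publishes minutes and logs)

The #undo and #chair commands both get used live later in this log, so the effects can be seen in context.
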
21:13:29 and of course gerrit is not that great for conversations
21:13:41 +1 this is the purpose of the WG having task-forces per cycle
21:13:49 I agree, I've found most issues raised from here have common cause with many others
21:13:51 when you read the meeting minutes anything with #link stands out
21:14:03 b1airo: agreed
21:14:32 ok so -
21:14:33 b1airo: I've also noticed that in the past, groups that meet regularly on IRC gain traction slowly over time if they're consistent, which means that little voice turns into a much larger and well-respected voice.
21:14:50 #action ALL: raise specs of interest to the working-group on the mailing list
21:14:50 #agreed collective input on specs is good
21:14:58 as to which of the task-forces in this WG does the scheduler work align to?
21:15:21 plus you can link to meeting logs as they are archived
21:15:22 so that helps to show that a group has made a decision
21:15:22 or is in accord
21:15:22 dabukalam: very much so
21:15:40 dfflanders: no clear one - but accounting might be closest
21:15:50 also ironic
21:15:58 nuage here - interested in ironic / baremetal use cases
21:16:02 there are definitely scheduler issues with ironic
21:16:25 hi christx2
21:16:31 hi
21:16:39 important that we have a clear statement for what we are jointly representing on behalf of the user.
21:16:48 scientific users
21:17:05 slash research users. ;)
21:17:11 dfflanders: are you raising "mission statement" as a point of discussion?
21:17:29 Shall we go through the four tasks in turn
21:17:39 +1
21:17:48 one item of house-keeping first
21:17:54 go ahead
21:17:57 dfflanders: well for starters I think it is important for those interested to start tracking the existing work
21:18:11 task-tracking - is everyone happy to use trello for the moment?
21:18:17 setting up barriers to doing that actually slows things down
21:18:27 blairo +1
21:18:35 (we can defer to established infra practices where appropriate, but storyboard seems a bit overkill for us at this point)
21:18:49 I am happy with trello for tracking where we are
21:18:58 I don't think we'll overflow it
21:19:02 I dislike trello however.... if that is the standard
21:19:12 i suspect it'll mainly be used by oneswig and me anyway
21:19:41 james__: it isn't
21:19:48 yes good way for chairs to keep track of task-force progress
21:20:07 more of a chair coordination thing, but useful to ping people assigned to tasks and gather extra details in one place
21:20:17 can we find other words that don't militarize the effort?
21:20:46 anteaya: glad to, let's discuss on the user-committee mailing list.
21:20:55 or here
21:20:56 Given Trello has zero setup effort, little is lost if we decide to drop it again
21:21:03 the point of the meeting is to discuss things
21:21:13 sorry anteaya, which words?
21:21:19 not just punt everything to a mailing list
21:21:20 task force
21:21:26 ah right
21:21:34 I don't think the effort needs militarizing
21:21:37 sub-team ?
21:21:45 activity?
21:22:00 b1airo: that works
21:22:01 red-wing ;-)
21:22:01 oneswig: so does that
21:22:01 activity sub-team??
21:22:10 sure
21:22:25 any others?
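
To make the spec-tracking #action above concrete: Gerrit supports search queries, so open nova-specs reviews touching the scheduler could be listed with something like the query below (an illustrative example; the project and search terms are placeholders to adjust):

    https://review.openstack.org/#/q/project:openstack/nova-specs+status:open+message:scheduler

Reviews found this way can then be raised with the working group on the mailing list, which keeps the "one little voice" problem from applying to individual comments.
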
21:22:41 i like "activity"
21:22:45 I don't know what red-wing means
21:22:45 I think blackbird on that
21:22:59 also no one needs to be blessed to attend any meeting or track and comment on any spec
21:23:15 time check
21:23:24 so please don't feel you need permission from anyone to do so, if you are interested in a thing, track it
21:23:24 #agreed we'll call the activities "activities" from here on
21:23:41 so is the scientific meeting still going? I'm a bit late
21:23:48 #topic User Stories
21:23:50 Hi Mike
21:23:52 hi jmlowe - yep
21:23:58 whew
21:23:58 hello Mike
21:24:03 Hey Bob
21:24:06 jmlowe: welcome, can you read the channel topic in your irc client?
21:24:21 join
21:24:23 jmlowe: that is a good way to see what activity is currently ongoing in a channel
21:24:27 ah, yeah, just noticed
21:24:36 thanks
21:24:36 It seems the foundation's got some doubts about the wiki
21:24:38 jmlowe: awesome
21:24:49 oneswig: not the foundation
21:24:52 Is there a more appropriate way of storing information?
21:24:54 the infra team
21:24:55 the infra team
21:24:56 which maintains the wiki
21:25:01 anteaya: right
21:25:06 use the wiki for now
21:25:22 boas@systemfabricworks.com join
21:25:23 anteaya: was there a decision or still collecting feedback on wiki uses?
21:25:37 Hello Bill
21:25:44 my first question re. user stories is, is there a standard we need to follow - i assume there is just a repo somewhere with an RST template and away we go?
21:26:02 Several levels of standards I think
21:26:05 Hi Stig
21:26:05 for the user story, the product wg has a template and repo
21:26:05 hi Bill (qwebirc65959)
21:26:16 I need to read some of them - should have prepared
21:26:36 hi Blair
21:26:37 we just have a long term direction of finding information sharing tools that are less attractive to spammers
21:26:38 the decision was to take the next year and find tools that are less attractive to spammers
21:26:38 oneswig: so use the wiki as you need to
21:26:41 leong, ++ You can start by using the Prod wg template
21:26:41 #action oneswig to read user stories and understand their scoping
21:27:00 here's a link to the product wg user story repo
21:27:10 #link product_wg user story repo: https://github.com/openstack/openstack-user-stories/tree/master/user-stories/proposed
21:27:15 oneswig, yes i think it's the scoping that i'm wondering about too
21:27:19 leong: thanks
21:27:30 that will help us determine what stories we might contribute
21:27:32 the product wg follows the openstack development flow and uses gerrit to track
21:27:45 But there are also reference architectures, which are more detailed, right?
21:27:53 the template can be found here:
21:28:00 i get the feeling it's closer to personas than specific use-cases, but just guessing really
21:28:04 #link product_wg user story template: https://github.com/openstack/openstack-user-stories/blob/master/user-story-template.rst
21:28:30 Enterprise WG is working on a series of Reference Architectures
21:28:57 Each activity has an etherpad ready to go right? was the intention to distill info into those? e.g.
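
On oneswig's scoping question: the user stories in that repo are RST files following the template linked at 21:28:04. Very roughly, the shape is something like the sketch below (an abbreviated, paraphrased outline, not the verbatim template; the title, headings, and wording here are illustrative, so check the template and the repo's HACKING.rst before submitting):

    Bare Metal HPC Cluster On Demand
    ================================

    Problem description
    -------------------
    A research computing provider needs to stand up and tear down
    bare-metal HPC partitions for different science communities.

    User stories
    ------------
    * As a cluster operator, I want to reprovision hundreds of nodes
      concurrently so that turnaround between projects stays short.

    Requirements
    ------------
    ...

As leong notes, submissions then go through the normal Gerrit review flow rather than being edited in place.
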
21:29:03 leong: thanks, I'll look for those too and report back
21:29:05 #link stories https://etherpad.openstack.org/p/scientific-wg-austin-summit-stories
21:29:13 #link https://github.com/openstack/openstack-user-stories/blob/master/HACKING.rst looks like a good place to start reading oneswig
21:29:50 ptrlv: that's one option for sure
21:29:53 b1airo: thanks
21:30:03 sorry all, had a family A&E visit so missed the first half of this meeting
21:30:18 if someone can pass the nova scheduler issues with ironic on, a review item - that would be great
21:30:20 dc_mattj: yikes!
21:30:34 nothing serious in the end, but obviously takes hours
21:30:51 dc_mattj: thanks for making it
21:30:57 /o\ dc_mattj
21:31:00 we are doing work on provisioning networks to ironic baremetal instances
21:31:07 thanks
21:31:36 ptrlv: are you thinking for planning here and now or tracking as we go along?
21:31:54 christx2: don't think we've got on to that yet
21:32:28 christx2: what action did you want there - "raise nova-scheduler issues with ironic for further discussion on mailing list" ?
21:32:44 ok, let's hold it and finish user-stories
21:32:57 Not sure about the rest of you, but thinking we should look for use cases with community wide traction. How do we discuss/determine which these are?
21:33:05 sorry if you guys have already done this, but do you have an open etherpad I can refer to ?
21:33:25 if we can let's use links to git.openstack.org
21:33:25 git.openstack.org is the infra-supported server
21:33:25 github is proprietary and we don't have control over their decisions
21:33:25 so links to git.openstack.org are preferred please
21:33:26 christx2: what do you mean?
21:33:30 LyleWinton: community wide meaning scientific community wide?
21:33:35 oneswig: I was thinking the main etherpad was a mess and we could summarize things a little more coherently. Dunno, what did you intend those other pads for?
21:33:36 LyleWinton: I think every use case I've heard of has some differences but much in common
21:33:38 i think we have LyleWinton (in the summit meetings) - but we can be always open to new stuff if there are people willing to work
21:33:42 Yep
21:33:58 LyleWinton: it might come down to who comes forward to document their case
21:34:06 oneswig: cool, will hang back until we do..
21:34:23 LyleWinton: Yep to what?
21:34:24 Sure, happy to let it evolve.
21:34:43 (yep to "community wide meaning scientific community wide?")
21:34:48 ptrlv: actually that's another action i think - moving some etherpad content to wiki and also creating a parking lot
21:35:03 Etherpads: it's a good idea ptrlv, we can revive those from the summit session
21:35:06 LyleWinton: ah thank you
21:35:37 LyleWinton: if you are replying to a specific querent it helps to use the querent's name
21:35:47 for instance for you in my client I type Ly and hit the tab button
21:35:59 oneswig: probably good to start some new ones with the main points as topics, and with the summit ones linked
21:36:00 and your name autocompletes
21:36:04 anteaya: will do. IRC noob
21:36:05 LyleWinton: so you know I am replying to you
21:36:08 we should make it a goal of this cycle to not create (effectively) the same etherpad again in Barcelona (which seems to happen more than it should)
21:36:23 LyleWinton: yup, I know, glad you are here, happy to support the learning process
21:36:32 blairo: +1
21:36:43 There's an action outstanding for solidifying some of the etherpad notes into wiki statements, can anyone take it (or shall I?)
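
Per anteaya's note about preferring git.openstack.org: the github.com URLs recorded above have direct equivalents on the infra-hosted cgit server, along these lines (an illustrative mapping based on the cgit URL scheme in use at the time; verify the path resolves before recording it in minutes):

    https://github.com/openstack/openstack-user-stories/blob/master/user-story-template.rst
        -> https://git.openstack.org/cgit/openstack/openstack-user-stories/tree/user-story-template.rst
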
21:36:45 blairo +1
21:36:55 i will
21:37:05 b1airo: good man
21:37:23 #action b1airo to "solidify" etherpad notes to wiki
21:37:23 #action b1airo to collect notes from etherpads from sessions into WG wiki
21:37:29 lol
21:37:31 split brain!
21:37:36 look at that for teamwork
21:37:39 you drive :-)
21:37:40 well, there are two main etherpads
21:37:43 if one of you does #undo the last command is removed from the minutes
21:37:54 how to arbitrate?
21:38:05 I will
21:38:06 rock, paper, scissors ...? ;-)
21:38:10 vote
21:38:14 #undo
21:38:15 Removing item from minutes:
21:38:19 phew
21:38:25 that was a close one
21:38:29 OK, next item?
21:38:34 jinx
21:38:35 One more agreement coming from the Austin Friday morning face to face: We'd like to promote our scientific cloud profiles but we were unsure where.
21:38:51 #link austin-pad https://etherpad.openstack.org/p/scientific-wg-austin-summit-agenda
21:39:09 LyleWinton, i think that'll come with a new page on openstack.org
21:39:17 LyleWinton: Is this the register of community clouds?
21:39:21 amirite dfflanders ?
21:39:39 however, we could get started now with a wiki page
21:40:15 There were 2 thoughts. First https://etherpad.openstack.org/p/science-clouds which we updated. The second, more coming from flanders afterwards, was to work on a new Marketplace category for community/scientific cloud listings.
21:40:27 LyleWinton you mean something like http://www.openstack.org/enterprise
21:40:31 There was a short list gathered, but it will need details on access, availability etc
21:40:40 ok cool. i'm gonna take that into my wiki update action then
21:40:55 leong: great page! yes!
21:41:00 e.g. http://www.openstack.org/scientific?
21:41:04 #action b1airo to wiki-ify #link https://etherpad.openstack.org/p/science-clouds
21:41:09 Like https://www.openstack.org/marketplace/public-clouds/
21:41:29 ok, we need to move on to bare-metal
21:41:35 When does a public cloud become a science cloud?
21:41:46 #topic bare-metal
21:42:09 leong: that would also be cool!
21:42:25 oneswig, good question... maybe when it is not-for-profit ?
21:42:35 It's to do with the workload I would say
21:42:38 A science cloud has community access restrictions.
21:42:46 b1airo: oh I think that would be a ball of yarn
21:42:55 science is more cpu heavy than most web serving for example
21:43:10 LyleWinton, yes good point
21:43:18 b1airo: I think: have you and oneswig decide what goes on the page for now, then build criteria from that
21:43:19 There are commercial and academic science clouds
21:43:25 i think there are two things: if you are talking about offering a scientific cloud for people to use, that might fall under the marketplace
21:43:27 OK, so there could be distinctions
21:43:39 Ironic anyone?
21:44:00 christx2?
21:44:13 we use Ironic
21:44:16 I'm restating, I'm interested in tracking the serial console work. Had some good discussions with the chameleon team wrt this at the summit
21:44:17 so I'm interested
21:44:19 james__: There's commercial use of cloud by scientific users, but that's different. I think we have to keep it a simple definition.
21:44:21 hi
21:44:48 oneswig: link to serial console work?
21:44:49 primary use cases are network provisioning to a baremetal instance
21:45:15 oneswig: the chameleon team?
21:45:28 #link serial console etherpad https://etherpad.openstack.org/p/ironic-newton-summit-console
21:45:33 i heard earlier that ironic is broken, so i'm curious who is using it and what network topologies they use
21:45:39 One worry about ironic is in the clean up
21:45:42 Chameleon is an NSF project, bare metal research cloud
21:45:51 christx2, which part of the network provisioning? tenant and provider, or provisioning network?
21:45:56 oneswig: us public clouds will take anyone who gives us money
21:46:00 tenant
21:46:02 christx2: can you expand on 'I heard earlier that ironic is broken'?
21:46:26 on the channel earlier a comment was made "nova scheduler issues affecting ironic"
21:46:31 +1 to "broken" for tenants.
21:46:48 dc_mattj: Us community clouds will take anyone from the community... and possibly who give us money... ;)
21:46:57 LyleWinton: lol
21:47:08 the bare-metal/ironic discussion in Austin was mostly rbudden's experience
21:47:10 So what I know about tenant networking in Ironic is that you need a provisioning network which probably has to be flat. Additional networks can be segmented (VLANs)
21:47:17 i don't know really anything current about ironic, but i'm surprised by the notion that tenant networks are "broken", there must be more to it
21:47:18 christx2: we have nova scheduler issues that cause issues provisioning our cluster
21:47:34 rbudden: I recall much interest from others so maybe you can summarize your stuff in more detail
21:47:35 okay, is this documented and which release of openstack?
21:47:40 oneswig, surely segmentation requirement goes without saying
21:47:41 There was a talk on it at Tokyo, I'll dig for it
21:47:53 (assuming you want tenant networks for isolation in the first place)
21:47:59 yep, the vlan segmentation is what we are targeting with what we do
21:48:03 I have heard things around accounting, cleaning the machine out.
21:48:06 it's quite interesting there seems to be a lot of this around ironic - is there a definitive document which says what works, what's broken etc. ?
21:48:19 ptrlv: main overview at PSC is that we have an 800+ node HPC cluster being managed by Ironic
21:48:20 okay, got it. the HW we use is HP, interfacing iLO
21:48:23 hypervisor support matrix?
21:48:39 is it still the case that everything has to be in the same L2 domain ?
21:48:54 rbudden: yeah, more thinking that you hit problems which others hit too
21:48:55 For example the clean out I have heard doesn't work on "standard" iLOs
21:48:55 #link tenant network isolation in ironic https://www.openstack.org/videos/video/tokyo-1929
21:49:03 ptrlv: we've hit some issues with the largest issue being the nova scheduler and race conditions when trying to simultaneously provision large portions of the cluster
21:49:09 rbudden, cool! but i guess you don't have ironic managing any tenant isolation
21:49:21 no, we aren't using Neutron
21:49:23 just for your own provisioning management?
21:49:32 dc_mattj: unfortunately there is only one document - python sources
21:49:42 unfair
21:49:44 then you run a regular batch-system over it?
21:50:03 oneswig: thank you
21:50:04 christx2: on which channel?
21:50:04 jroll: ^^
21:50:05 b1airo: thank you, I also am curious about details on this assertion
21:50:07 I'm still not clear where the assertion came from that anything is broken
21:50:19 b1airo: correct, we run Slurm as the default for doing batch scheduling, then use Puppet to dynamically configure the nodes into things like separate Hadoop clusters, etc. when necessary
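
For context on the flat provisioning network oneswig describes at 21:47:10: with the Mitaka-era neutron CLI, such a network would typically be created along these lines (a hedged sketch; "physnet1", the network name, and the CIDR are placeholder values specific to a deployment):

    neutron net-create provisioning --shared \
        --provider:network_type flat \
        --provider:physical_network physnet1
    neutron subnet-create provisioning 10.1.0.0/24 --name provisioning-subnet

Tenant-facing networks can then be separate VLAN-segmented provider networks, which is the isolation model being discussed here; the open problem raised below is detaching nodes from the provisioning network after deploy.
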
21:50:22 science is also a code word for pets. ;) so we also want to encourage good cloud dev habits. acknowledging that there will be a period of adoption/pain as app deployment models migrate.
21:50:26 Comments from the Manchester meetup for me.
21:50:31 * jroll listens
21:50:37 jpr, yes re. pets :-)
21:50:38 b1airo: also use Slurm to spin up Nova Computes if we need VMs on Bridges PVT
21:50:47 anteaya: not sure what you mean
21:50:50 Ironic is mainly for reimaging the machine
21:51:01 jroll: apparently someone somewhere said ironic is broken
21:51:13 yes, on this channel
21:51:14 jroll: so now christx2 is repeating that ironic is broken
21:51:15 Re: what is a science cloud? I think what many think of as science clouds are clouds that provide resources to run scientific investigations. So, science "loads."
21:51:22 jroll: but I haven't found out any details
21:51:31 There are lots of simulations that don't use pets, though.
21:51:32 christx2: during this meeting?
21:51:34 christx2: what's broken? I see things about networking
21:51:36 b1airo: we don't reimage frequently, mainly due to the nova scheduler issues and it taking 1.5 days to reimage the entire cluster ;)
21:51:38 yeah
21:51:41 let me scroll back
21:51:59 christx2: we have some... limitations, yes. we're working on making it better
21:51:59 christx2: jroll is the ironic PTL (project team lead)
21:52:03 the issue i have with deploying ironic in a multi-tenant environment is with disconnecting the provisioning network once the instance is booted
21:52:07 jpr: "science is code for pets". Harsh dude. Sure, we're not advanced, but our codes include decades of experimental and methods validation.
21:52:17 jroll: hi - follow up on the discussion re: flaky bmcs - is there anything operators could do to help with taxonomy of problems?
21:52:22 Ironic issues in ensuring that the machine is cleaned between tenants.
21:52:22 jroll, can only use a flat network? L2?
21:52:38 lylewinton, understood and not meant to be harsh. totally recognize what's going on there.
21:52:59 requires switch orchestration, and generally all our management/provisioning gear is on a different part of the network with FEXes and stuff that aren't supported in Neutron
21:53:05 rockyg: yes, L2 or L3, doesn't matter, but data plane and control plane must be the same network today
21:53:09 b1airo
21:53:09 christx2: what action did you want there - "raise nova-scheduler issues with ironic for further discussion on mailing list" ?
21:53:40 there is value in those pets but more so as image definitions rather than specific instances. that's where the value of a curated execution environment comes in.
21:53:46 oneswig: someone who's using ironic should take an action to document that stuff for the wider community
21:53:50 oneswig: JayF is working on a spec for handling bmcs better, you might talk to him
21:54:05 jroll: OK thanks, I'll look out for him
21:54:06 s/better/more automagically/
21:54:08 jroll, ^^ what blairo said
21:54:21 #action oneswig to raise BMC failure modes with JayF
21:54:53 dc_mattj: i have logs of some of the nova scheduler issues, along with other problems i've encountered. my personal goal is to file some bugs and have some fixes for what i've seen and tested
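
On the scheduler race conditions rbudden mentions (21:49:03 and 21:51:36): one commonly cited partial mitigation in Mitaka-era nova was to randomize host selection and allow more reschedule attempts via nova.conf. Both options below existed at the time, but the values shown are illustrative and this reduces rather than eliminates collisions:

    [DEFAULT]
    # choose randomly among the N best-weighed hosts instead of always
    # the single top host, so parallel requests collide less often
    scheduler_host_subset_size = 20
    # give failed builds more chances to be rescheduled onto another node
    scheduler_max_attempts = 10

The proper fix was expected from the scheduler rework discussed earlier in the meeting, hence the action to raise these issues on the mailing list.
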
21:54:55 #action discuss/identify current working and problem use-cases for ironic in research/hpc
21:55:02 oneswig: BMCs generally are a horror of inconsistency - see recent posts to the ops list
21:55:14 that LCA video was great
21:55:16 dc_mattj: saw them, and wept
21:55:37 jpr: don't worry, not taken too harshly. On our cloud, several thousand users, probably 50 embracing new cloud architecture. So expert community building and supporting new stack development are key parts of our strategy.
21:55:42 ok, can we squeeze in HPFS quickly or shall we leave it for next week
21:55:45 oneswig: he spends a lot of time in the #openstack-ironic channel
21:55:48 jroll: thanks for joining on short notice
21:55:48 jroll: I appreciate it
21:55:48 oneswig: this is why making something like ironic work in any kind of general way across hardware platforms is horrific
21:55:48 jroll: :)
21:56:09 anteaya: very welcome
21:56:10 anteaya, jroll ++ thanks both of you
21:56:11 +1
21:56:14 jroll: thanks
21:56:25 to all of you: feel free to jump in the ironic channel whenever you want to chat more
21:56:37 we are almost out of time. Any other business to raise?
21:56:41 b1airo: are we going to have open discussion today?
21:56:42 that is #openstack-ironic
21:56:57 jroll: i've been lurking, plan on becoming more active and getting involved in some dev as time permits
21:57:01 I have an item for open discussion
21:57:10 #topic AOB
21:57:12 rbudden: awesome :)
21:57:13 anteaya, i think we have had open discussion all the way through :-) but if there's anything else...?
21:58:00 send the etherpad plz
21:58:17 out to the group ;)
21:58:19 * waiting in anticipation for anteaya's item *
21:58:19 #action b1airo to set up trello and post details for anyone interested in watching
21:58:31 or trello is fine
21:58:44 oh okay well I'll wait until after this topic
21:58:45 LyleWinton: oh okay
21:58:45 well I'll just charge ahead then
21:58:45 is there a new etherpad I've missed ?
21:58:46 LyleWinton, i think it was just a reminder of usual formality
21:59:00 have we been following an agenda for this meeting?
21:59:14 b1airo: Oh, sorry, misunderstood
21:59:16 Well kind of, half way through it
21:59:23 anteaya, a very loose one
21:59:33 so you guys are going to use trello as opposed to etherpads ?
21:59:36 we'll tighten it up for next week
21:59:42 #link Agenda was https://wiki.openstack.org/wiki/Scientific_working_group#IRC_Meeting_May_17th_2016
21:59:47 okay in future after the meeting is begun can we link to the agenda?
21:59:50 yeah like that
21:59:55 dc_mattj, no not instead, but for tracking tasks yes
21:59:57 only at the start?
21:59:59 just at the beginning of the meeting next time?
22:00:08 oneswig: yes
22:00:08 thanks
22:00:13 thanks everyone
22:00:14 i have noticed people using etherpads as task trackers and it looks... messy?
22:00:15 blairo: ok cool, will check it out
22:00:28 thank you
22:00:31 blairo: that certainly can be true
22:00:36 great first meeting
22:00:36 well done
22:00:39 +1
22:00:45 thanks all!
22:00:46 +1
22:00:48 thanks everyone
22:00:54 until next time
22:00:54 +1
22:00:55 +1 and thanks for all the mentoring, anteaya!
22:00:56 +1 thanks and bye
22:00:59 #endmeeting