Friday, 2017-08-11

00:08 *** esikachev has joined #openstack-sahara
00:11 *** cpusmith has quit IRC
00:12 *** esikachev has quit IRC
00:20 *** iwonka has quit IRC
00:55 *** openstackgerrit has joined #openstack-sahara
00:55 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492726
00:55 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492727
00:55 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara master: Update reno for stable/pike  https://review.openstack.org/492728
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-dashboard stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492729
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-dashboard stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492730
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-dashboard master: Update reno for stable/pike  https://review.openstack.org/492731
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-extra stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492732
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-extra stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492733
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-image-elements stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492734
00:56 <openstackgerrit> OpenStack Release Bot proposed openstack/sahara-image-elements stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492735
01:09 *** esikachev has joined #openstack-sahara
01:14 *** esikachev has quit IRC
01:25 *** ukaynar has joined #openstack-sahara
01:29 *** shuyingya has joined #openstack-sahara
01:55 *** ukaynar has quit IRC
01:56 *** ukaynar has joined #openstack-sahara
02:00 *** ukaynar has quit IRC
02:18 *** tellesnobrega has quit IRC
02:21 *** openstackgerrit has quit IRC
02:22 *** ukaynar has joined #openstack-sahara
02:28 *** tellesnobrega has joined #openstack-sahara
02:32 *** ukaynar has quit IRC
02:32 *** ukaynar has joined #openstack-sahara
02:37 *** ukaynar has quit IRC
02:39 *** ukaynar has joined #openstack-sahara
03:06 *** dave-mccowan has quit IRC
03:11 *** esikachev has joined #openstack-sahara
03:15 *** esikachev has quit IRC
04:11 *** esikachev has joined #openstack-sahara
04:16 *** esikachev has quit IRC
04:47 *** tnovacik has joined #openstack-sahara
04:51 *** Poornima has joined #openstack-sahara
05:09 *** tnovacik has quit IRC
05:12 *** esikachev has joined #openstack-sahara
05:17 *** Poornima_K has joined #openstack-sahara
05:17 *** esikachev has quit IRC
05:18 *** Poornima has quit IRC
05:40 *** Poornima_K has quit IRC
05:40 *** Poornima has joined #openstack-sahara
05:41 *** ukaynar has quit IRC
05:42 *** ukaynar has joined #openstack-sahara
05:46 *** ukaynar has quit IRC
06:13 *** esikachev has joined #openstack-sahara
06:16 *** tesseract has joined #openstack-sahara
06:17 *** esikachev has quit IRC
06:22 *** rcernin has joined #openstack-sahara
06:34 *** anshul has joined #openstack-sahara
06:37 *** Poornima has quit IRC
06:52 *** Poornima has joined #openstack-sahara
07:03 *** pgadiya has joined #openstack-sahara
07:10 *** Poornima has quit IRC
07:14 *** esikachev has joined #openstack-sahara
07:18 *** esikachev has quit IRC
07:52 *** Poornima has joined #openstack-sahara
07:53 *** openstackgerrit has joined #openstack-sahara
07:53 <openstackgerrit> OpenStack Proposal Bot proposed openstack/sahara-dashboard master: Imported Translations from Zanata  https://review.openstack.org/492842
08:02 *** openstackgerrit has quit IRC
08:14 *** esikachev has joined #openstack-sahara
08:19 *** esikachev has quit IRC
08:42 *** gokhan_ has joined #openstack-sahara
08:44 <zhuli> hi folks, I keep hitting this bug ('Clusters can stay in Deleting state forever' https://bugs.launchpad.net/sahara/+bug/1647411); I wonder if we have a plan to do something about it.
08:44 <openstack> Launchpad bug 1647411 in Sahara "Clusters can stay in Deleting state forever" [Medium,Triaged]
08:54 *** iwonka has joined #openstack-sahara
09:01 *** openstackgerrit has joined #openstack-sahara
09:01 <openstackgerrit> OpenStack Proposal Bot proposed openstack/sahara master: Imported Translations from Zanata  https://review.openstack.org/492912
09:15 *** esikachev has joined #openstack-sahara
09:20 *** esikachev has quit IRC
09:38 *** esikachev has joined #openstack-sahara
10:01 *** pgadiya has quit IRC
10:18 *** pgadiya has joined #openstack-sahara
10:25 *** esikachev has quit IRC
10:26 *** esikachev has joined #openstack-sahara
10:29 *** tellesnobrega has quit IRC
11:14 *** tellesnobrega has joined #openstack-sahara
11:15 *** esikachev has quit IRC
11:28 *** esikachev has joined #openstack-sahara
11:38 *** elmiko has quit IRC
11:41 *** elmiko has joined #openstack-sahara
11:46 *** shuyingy_ has joined #openstack-sahara
11:46 *** shuyingya has quit IRC
11:56 <tellesnobrega> shuyingy_, very sorry I haven't been able to test the CDH stuff
11:56 <tellesnobrega> I'm busy with some downstream stuff alongside stable/pike upstream
11:56 <tellesnobrega> I will probably have some time next week
12:01 <shuyingy_> Hi tellesnobrega, don't worry. It looks like I got that error because I chose a volume size of less than 10 GB in the CDH plugin. The CDH CentOS image seems to reserve 10 GB for the system.
12:01 <tellesnobrega> hmm
12:01 <tellesnobrega> interesting
12:02 <tellesnobrega> if you confirm that, it would be good to add it to the documentation
12:02 <shuyingy_> Yeah, that may be the root cause
12:03 <shuyingy_> yes. I can paste a picture to show you the reason.
12:04 <tellesnobrega> thanks
12:08 <shuyingy_> http://imgur.com/a/SFYuA
12:10 <shuyingy_> tellesnobrega, I have created a cluster; the slave node has a 13 GB volume attached, but the capacity for HDFS is only 3 GB. There must be some reason the system reserves 10 GB, I think. I use the CDH CentOS image
12:11 <tellesnobrega> true, I will try to investigate that
12:11 <tellesnobrega> btw, you are using cdh 5.4.0; does this also happen with newer versions?
12:12 <shuyingy_> I am busy preparing slides for a presentation on Sunday. sorry for the late update on the status of that bug.
12:12 <tellesnobrega> no worries
12:12 <shuyingy_> I have tested it on CDH 5.7.0; it reproduced
12:13 <tellesnobrega> ok, when you have time can you report that on launchpad?
12:13 <shuyingy_> sure
12:14 <shuyingy_> My product was released on Wednesday, and I have recruited two people to help me improve this component. I think I can put more time into upstream now.
12:14 <shuyingy_> :_
12:14 <shuyingy_> :)
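The missing-capacity symptom shuyingy_ describes (a 13 GB volume but only ~3 GB of HDFS capacity on the CDH CentOS image) can be checked directly on a cluster node. A hedged sketch with standard tooling; the exact mount point of the sahara-attached volume varies by image:

```shell
# On a CDH slave node: compare the attached volume size with what HDFS
# actually reports. Per this discussion, roughly 10 GB appears to be
# reserved by the system on the CDH CentOS image.
df -h                                  # find the attached volume and its mount point
hdfs dfsadmin -report | grep -E 'Configured Capacity|Present Capacity|DFS Remaining'
```

If "Configured Capacity" is consistently ~10 GB below the raw volume size, that supports the reservation theory and is worth recording in the launchpad bug.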
12:16 *** pgadiya has quit IRC
12:19 *** zhuli has quit IRC
12:19 *** zhuli has joined #openstack-sahara
12:21 *** shuyingy_ has quit IRC
12:25 *** dave-mccowan has joined #openstack-sahara
12:36 *** esikachev has quit IRC
12:37 *** esikachev has joined #openstack-sahara
12:42 *** esikachev has quit IRC
13:02 *** shuyingya has joined #openstack-sahara
13:07 *** shuyingya has quit IRC
13:07 *** Poornima has quit IRC
13:34 *** cpusmith has joined #openstack-sahara
13:36 *** cpusmith_ has joined #openstack-sahara
13:40 *** cpusmith has quit IRC
13:43 *** tellesnobrega has quit IRC
13:43 *** tellesnobrega has joined #openstack-sahara
13:47 *** jeremyfreudberg has joined #openstack-sahara
13:49 <jeremyfreudberg> zhuli, regarding the "deleting state forever" bug: we are going to revisit the issue at the PTG next month. We think we might be able to integrate heat's "stack abandon" feature, etc., to accomplish force-delete of a cluster
13:54 *** lucasxu has joined #openstack-sahara
13:58 <tellesnobrega> jeremyfreudberg, good news from shuyingya, did you see it in the logs?
13:59 <tellesnobrega> good and bad, but I'm focusing on the good one
13:59 <tellesnobrega> he may get some more time upstream this next cycle
13:59 <jeremyfreudberg> tellesnobrega, yes I did. more time upstream is awesome. and the "bad news" about the volume is actually pretty good; at least we have some understanding of the bug now
14:00 <tellesnobrega> yeah
14:00 *** esikachev has joined #openstack-sahara
14:05 *** tnovacik has joined #openstack-sahara
14:08 <jeremyfreudberg> tellesnobrega, I saw sahara RC1 is tagged. I'm trying to think if there is anything critical left to do... I think nothing, right?
14:09 <tellesnobrega> nothing critical
14:09 <tellesnobrega> the big issues were the trust one, documentation, and sahara-dashboard
14:09 <tellesnobrega> those are fixed
14:09 <tellesnobrega> I believe we are good to go for a final release
14:10 <tellesnobrega> just maybe check for open bugs
14:10 <tellesnobrega> small fixes
14:13 <jeremyfreudberg> checking over the launchpad bugs, I think we are good for this release
14:14 <tellesnobrega> cool
14:15 *** tnovacik has quit IRC
14:16 <cpusmith_> jeremyfreudberg: We have a lot of Heat stacks stuck in delete-in-progress or failed, and the delete or abandon command won't get rid of them. The resources are gone, so is the only method left "MariaDB [heat]> update resource set status='COMPLETE' where uuid='xxxxxxxxxxxxxxxxxxxx';"?
14:17 <jeremyfreudberg> cpusmith_, abandon should work; that's what it's designed to do. Make sure you have enable_stack_abandon=True in the heat conf, then restart heat-engine and try.
14:17 <cpusmith_> We do
14:18 <jeremyfreudberg> cpusmith_, hmm, in my experience the heat stack abandon feature does drop the stack from the database; I'm surprised that doesn't work
14:18 <cpusmith_> OK, that works on the failed ones but not the ones stuck in progress
14:19 <jeremyfreudberg> cpusmith_, oh, I see
14:19 <jeremyfreudberg> I believe that due to a timeout DELETE_IN_PROGRESS should eventually become DELETE_FAILED, but I have to check. Still, there should be some better way
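The abandon workflow jeremyfreudberg describes, as a sketch. enable_stack_abandon is the heat option named in the log; the config path and service unit name below are assumptions that vary by deployment:

```shell
# 1. In /etc/heat/heat.conf (path may differ), under [DEFAULT], set:
#        enable_stack_abandon = True
# 2. Restart heat-engine so the option takes effect (unit name varies by distro):
sudo systemctl restart openstack-heat-engine
# 3. Abandon the stuck stack: heat drops it from its own database without
#    touching the (already deleted) underlying resources.
openstack stack list
openstack stack abandon <stack-id>
```

As noted above, this works once a stack reaches DELETE_FAILED; stacks stuck in DELETE_IN_PROGRESS may need to time out first.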
14:21 *** esikachev has quit IRC
14:23 *** ukaynar has joined #openstack-sahara
14:37 *** esikachev has joined #openstack-sahara
14:49 *** anshul has quit IRC
14:57 *** rcernin has quit IRC
14:59 *** shuyingya has joined #openstack-sahara
15:02 *** esikachev has quit IRC
15:03 *** shuyingya has quit IRC
15:03 *** ukaynar has quit IRC
15:04 *** ukaynar has joined #openstack-sahara
15:08 *** ukaynar has quit IRC
15:20 *** anshul has joined #openstack-sahara
15:21 *** tellesnobrega has quit IRC
15:27 *** anshul has quit IRC
15:31 *** shuyingya has joined #openstack-sahara
15:54 *** jeremyfreudberg has quit IRC
15:55 *** ukaynar has joined #openstack-sahara
16:07 *** lucasxu has quit IRC
16:10 <zhuli> jeremyfreudberg: thanks for the info
16:34 *** ukaynar_ has joined #openstack-sahara
16:37 *** ukaynar has quit IRC
16:41 *** tesseract has quit IRC
17:00 *** shuyingya has quit IRC
17:09 *** anshul has joined #openstack-sahara
17:10 *** shuyingya has joined #openstack-sahara
17:14 *** shuyingya has quit IRC
17:19 *** tnovacik has joined #openstack-sahara
17:25 *** anshul has quit IRC
17:42 *** esikachev has joined #openstack-sahara
17:45 *** tmckay has joined #openstack-sahara
17:56 <tomtomtom> Hey, I'm using the following template to attempt to resize my node group: {"resize_node_groups": [{"count": 2, "name": "sparkslave"}]}  But I get an error that resize_node_groups isn't a valid option. Is there a reference for building the JSON scaling files anywhere? It's been hard to find online.
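For reference, tomtomtom's payload matches the scale-cluster request body in the data-processing API reference cited later in this conversation. A minimal sketch that writes the body out and sanity-checks it before handing it to any client:

```shell
# Write the resize_node_groups body from the discussion to a file and
# verify that it parses as valid JSON.
cat > cluster-scale.json <<'EOF'
{
    "resize_node_groups": [
        {
            "name": "sparkslave",
            "count": 2
        }
    ]
}
EOF
# Fails with a non-zero exit status if the JSON is malformed.
python -m json.tool cluster-scale.json
```

As the discussion below shows, the error tomtomtom hit came from the old-style CLI rather than from the payload itself.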
17:59 *** shuyingya has joined #openstack-sahara
18:04 <openstackgerrit> Merged openstack/sahara-image-elements stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492734
18:04 *** shuyingya has quit IRC
18:05 <openstackgerrit> Merged openstack/sahara-dashboard stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492729
18:05 <openstackgerrit> Merged openstack/sahara-extra stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492732
18:05 <openstackgerrit> Merged openstack/sahara-extra stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492733
18:10 <openstackgerrit> Merged openstack/sahara-dashboard master: Update reno for stable/pike  https://review.openstack.org/492731
18:10 <openstackgerrit> Merged openstack/sahara-dashboard stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492730
18:10 <openstackgerrit> Merged openstack/sahara-dashboard master: Imported Translations from Zanata  https://review.openstack.org/492842
18:10 *** tellesnobrega has joined #openstack-sahara
18:23 *** ltosky[m] has quit IRC
18:29 *** jeremyfreudberg has joined #openstack-sahara
18:29 <jeremyfreudberg> tomtomtom, I'm taking a look at your question now
18:29 <tomtomtom> ok thanks
18:30 <jeremyfreudberg> I'm assuming that you've already tried https://developer.openstack.org/api-ref/data-processing/#scale-cluster , so let me see what actually is happening in a real deployment
18:30 *** ltosky[m] has joined #openstack-sahara
18:32 <tomtomtom> well, I removed add_node_groups because the node group is already there; is that correct?
18:32 *** tellesnobrega has quit IRC
18:33 <jeremyfreudberg> tomtomtom, yes
18:35 <tomtomtom> did that and got the following error: http://imgur.com/a/edkhQ
18:36 <tomtomtom> command was: ./sahara cluster-scale --name clusterin-1 --id d5324bb0-7bcc-46a0-b0f2-060b0bb67231 --json /root/cluster-scale.json
18:36 <jeremyfreudberg> tomtomtom, taking a look
18:36 *** tellesnobrega has joined #openstack-sahara
18:40 <jeremyfreudberg> tomtomtom, I wonder if `openstack dataprocessing cluster scale` would have the same issue
18:40 <tomtomtom> can try it
18:40 <jeremyfreudberg> in fact, we've removed the old sahara CLI, so it's entirely possible it has some bugs that we've forgotten
18:44 <jeremyfreudberg> tomtomtom, for what it's worth, I do see a problem in the code of the old-style CLI https://github.com/openstack/python-saharaclient/blob/stable/newton/saharaclient/api/shell.py#L352 - I'd imagine the ** is the cause of the problem
18:44 <tomtomtom> ok yeah that command started the scaling process, thanks
18:49 *** shuyingya has joined #openstack-sahara
18:53 *** shuyingya has quit IRC
19:11 <openstackgerrit> Merged openstack/sahara stable/pike: Update .gitreview for stable/pike  https://review.openstack.org/492726
19:11 <openstackgerrit> Merged openstack/sahara master: Update reno for stable/pike  https://review.openstack.org/492728
19:11 <openstackgerrit> Merged openstack/sahara stable/pike: Update UPPER_CONSTRAINTS_FILE for stable/pike  https://review.openstack.org/492727
19:42 <tomtomtom> jeremyfreudberg: so is this an update to sahara or the horizon saharaclient that needs to take place?
19:43 <tomtomtom> is there a schedule to update that code? the openstack command works; I'm just curious at this point.
19:44 <jeremyfreudberg> tomtomtom, the `sahara cluster-scale` thing is already removed from latest upstream. I'm still not sure of the root cause of your horizon scaling issues
19:45 <tomtomtom> ok, so it's my "newton" version of the sahara code that is causing the issue.
19:45 <tomtomtom> ?
19:45 <tellesnobrega> tomtomtom, the newton release already used the openstack unified CLI, iirc
19:45 <jeremyfreudberg> tomtomtom, in terms of the CLI, yes. in terms of horizon - again, not sure
19:45 <tomtomtom> ok
19:45 <jeremyfreudberg> tellesnobrega, in newton we still had both, I believe
19:46 <tellesnobrega> jeremyfreudberg, both yes, but does it default to using openstack?
19:46 <tellesnobrega> or sahara client
19:47 <jeremyfreudberg> tellesnobrega, there are 3 things: the sahara client library, the new CLI, and the old CLI. both the new and old CLIs call the sahara client library. it's the user's choice (newton and before) which CLI they want to use. in the case of horizon (tomtomtom's previous issue), it's just the client library directly
19:48 <jeremyfreudberg> just so everyone can be clear
19:48 <tellesnobrega> jeremyfreudberg, hm, I see
19:49 <tellesnobrega> weird that it is failing on the python client
19:52 <cpusmith_> jeremyfreudberg, can the scaling be fixed for Newton?
19:53 <jeremyfreudberg> cpusmith_, the scaling does work in the new-style CLI (which is present in newton). again, I still have yet to diagnose the cause of the failed scaling through the UI
19:53 <tellesnobrega> cpusmith_, we can certainly work on a backport once we identify the real issue
19:53 <tellesnobrega> backport = fix
19:54 <tellesnobrega> cpusmith_, can you create a bug with all the info you have about that on launchpad? just so we can track it
19:54 <tellesnobrega> and prioritize it
19:54 <cpusmith_> OK, tomtomtom will
19:54 <tellesnobrega> cpusmith_, thanks
19:55 <tellesnobrega> tomtomtom, you probably know it, but just to make it easier
19:55 <tellesnobrega> https://bugs.launchpad.net/sahara
19:55 <cpusmith_> We've got everything else working, it seems. The last ssh issue was the MTU. Forced it to 1500 on the image instead of the default 9000
19:56 <tellesnobrega> cpusmith_, that is good to hear
19:56 <iwonka> tellesnobrega: I kind of figured out what's wrong
19:56 <jeremyfreudberg> cpusmith_, can you explain how you change the MTU "on the image"?
19:57 <tellesnobrega> iwonka, that is good to hear
19:57 <jeremyfreudberg> iwonka: you mean with your dashboard patch? good to hear
19:57 <iwonka> yes, that's what I mean
19:57 <iwonka> but the form approach has one issue
19:57 *** shuyingya has joined #openstack-sahara
19:58 <iwonka> when I need things like the image id, they depend on the plugin
19:58 <iwonka> so I cannot create a drop-down menu with them in the form
19:58 *** esikachev has quit IRC
19:58 <iwonka> because it's before the file is uploaded
19:58 <jeremyfreudberg> iwonka, hmm, I see what you mean
19:59 *** esikachev has joined #openstack-sahara
20:00 <cpusmith_> We have 9000 set on the networking, which gets pushed to the VM on creation. I launched a VM with the Spark image and added "post-up /sbin/ifconfig eth0 mtu 1500" to the /etc/network/interfaces.d/eth0.cfg file, created a snapshot, created a volume from the snapshot, uploaded the volume as an image, and registered it in Sahara
20:00 <cpusmith_> 4:45 for a 2 node cluster to go green
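cpusmith_'s MTU workaround as a sketch, assuming a Debian/Ubuntu-style guest image using ifupdown (the file path and post-up line are the ones quoted in the log; whether eth0.cfg already exists in a given image is an assumption):

```shell
# Inside a VM booted from the Spark image: clamp the interface MTU to
# 1500 even though the network pushes 9000, so SSH to instances works.
cat >> /etc/network/interfaces.d/eth0.cfg <<'EOF'
post-up /sbin/ifconfig eth0 mtu 1500
EOF
# Then, per the log: snapshot the VM, create a volume from the snapshot,
# upload the volume as an image, and register that image in sahara.
```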
20:00 <tellesnobrega> cpusmith_, nice workaround.
20:01 *** ltosky[m] has quit IRC
20:02 *** shuyingya has quit IRC
20:02 <jeremyfreudberg> cpusmith_, nice.
20:02 <jeremyfreudberg> iwonka, I might have a suggestion for you, actually
20:03 <cpusmith_> yea, some container or part has a 1500 MTU on our OS; it first prevented the meta-data from being read, so we set it to 7340, then had to go to 1500 for SSH
20:04 *** esikachev has quit IRC
20:04 <jeremyfreudberg> iwonka: if you look at orchestration->stacks->launch stack, they actually have a multiple-page form with an upload button
20:04 <jeremyfreudberg> you can see how they implemented that, I guess
20:04 <jeremyfreudberg> except in ours, the first page would be the plugin/version choice, the second page is the file upload
20:05 <jeremyfreudberg> iwonka, to see it, it's in the horizon repo, I mean
20:05 *** ltosky[m] has joined #openstack-sahara
20:05 <iwonka> jeremyfreudberg: I get two file uploads on the first page there
20:06 *** esikachev has joined #openstack-sahara
20:06 <jeremyfreudberg> iwonka, yes, but I hope it's simple enough to reduce it to 1 :)
20:06 <iwonka> yes, it is :)
20:06 <tellesnobrega> hm, that makes sense, lots of our sahara ui does that, first page asking for plugin/version
20:06 <jeremyfreudberg> switching the order of the pages there might be tricky, but I figure it's something to help
20:07 <jeremyfreudberg> tellesnobrega, yep
20:07 <iwonka> they parse it as soon as possible
20:07 <iwonka> that sounds good actually
20:08 <iwonka> thanks a lot jeremyfreudberg
20:08 <jeremyfreudberg> iwonka, np. good luck
20:10 *** esikachev has quit IRC
20:15 <cpusmith_> Any idea why I'm getting "too few arguments" on this? openstack dataprocessing cluster scale --instances sparkslave-2:3 Fatman-4
20:16 <jeremyfreudberg> fatman4 needs a ":count" after it, no
20:16 <jeremyfreudberg> ?
20:17 <jeremyfreudberg> oops, I can't read
20:17 <cpusmith_> That's the name of the cluster
20:17 <jeremyfreudberg> cpusmith_, yep, just realized
20:18 <jeremyfreudberg> so, uh, I think tomtomtom already did it successfully, no?
20:18 <cpusmith_> with a json file, but that may be too difficult for tier 2 support
20:19 <jeremyfreudberg> oh I see
20:19 <jeremyfreudberg> hmm
20:19 <jeremyfreudberg> let me try on my own env
20:19 <cpusmith_> So what does this mean
20:19 <cpusmith_> --instances <node-group-template:instances_count> [<node-group-template:instances_count> ...]
20:19 <cpusmith_> Why double?
20:19 <jeremyfreudberg> I assume it means you can scale just one node group, or a bunch of different node groups
20:20 <jeremyfreudberg> some topologies might have one worker type that does this, one worker type that does that
20:20 <tellesnobrega> jeremyfreudberg is correct on that
20:20 <jeremyfreudberg> anyway, cpusmith_, I also get "too few arguments" trying to use the cli in whatever way my brain assumed was the intuitive way
20:20 <jeremyfreudberg> so let me dive into the code
20:21 <jeremyfreudberg> and see what the "real way" is
20:22 <cpusmith_> Got it
20:22 <cpusmith_> needs --wait? --> openstack dataprocessing cluster scale --instances sparkslave-2:3 --wait Fatman-4
20:23 <cpusmith_> Cluster "Fatman-4" scaling has been started.
20:23 <jeremyfreudberg> cpusmith_, hmm, it should not need that, but let me check
20:24 <cpusmith_> It's working....adding 2 more
20:25 <cpusmith_> The dev test code has --wait in it too
20:29 <jeremyfreudberg> cpusmith_, I am still trying to figure out why --wait was needed for the arguments to parse correctly, but in the meantime, what happens if you reverse the arguments: `openstack dataprocessing cluster scale clustername --instances worker:2`
20:29 <jeremyfreudberg> with that, instead of not enough args, I get something about no match
20:29 <jeremyfreudberg> would be interesting if you get the same
20:32 *** tnovacik has quit IRC
20:33 <cpusmith_> Nope, it worked
20:33 <cpusmith_> Cluster "Fatman-4" scaling has been started.
20:33 <jeremyfreudberg> cpusmith_, and that's without `--wait`? if so, nice!
20:33 <jeremyfreudberg> (and strange for me)
20:34 <cpusmith_> no, with --wait. the CLI waits of course, but without it I get too few arguments
20:34 <jeremyfreudberg> cpusmith_, too few arguments, even with the arguments flipped like I suggested?
20:34 <cpusmith_> no, one sec
20:36 <cpusmith_> Yep, that works. The --help is incorrect
20:36 <cpusmith_> Cluster "Fatman-4" scaling has been started.
20:37 <cpusmith_> Cli returns immediately
20:37 <jeremyfreudberg> cpusmith_, so just so I am absolutely clear: `openstack dataprocessing cluster scale clustername --instances ng:count` with no --wait worked for you?
20:44 <cpusmith_> Yes it did
20:45 <jeremyfreudberg> cpusmith_, good. (and also bad... I guess we really have to get better docs / fix the help text. thanks for reporting.)
20:46 <cpusmith_> Thanks for helping
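Summing up the invocations from this exchange (cluster and node group names are the ones from the log, and the behavior described is what cpusmith_ observed, not a documented guarantee):

```shell
# Worked without --wait: cluster name first, then --instances <node-group>:<count>.
openstack dataprocessing cluster scale Fatman-4 --instances sparkslave-2:3

# The order suggested by --help only parsed when --wait was also given;
# --wait additionally blocks until scaling completes.
openstack dataprocessing cluster scale --instances sparkslave-2:3 --wait Fatman-4
```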
21:00 <jeremyfreudberg> https://bugs.launchpad.net/sahara/+bug/1710302 just to keep track
21:00 <openstack> Launchpad bug 1710302 in Sahara "[CLI] Bad help text" [Medium,New]
21:08 <tomtomtom> I also opened a bug for the backport mentioned above: https://bugs.launchpad.net/sahara/+bug/1710304
21:08 <openstack> Launchpad bug 1710304 in Sahara "Sahara UI not able to scale due to old client being used" [Undecided,New]
21:10 *** shuyingya has joined #openstack-sahara
21:12 *** dave-mccowan has quit IRC
21:14 *** shuyingya has quit IRC
21:31 *** ukaynar_ has quit IRC
21:32 *** ukaynar has joined #openstack-sahara
21:36 *** ukaynar has quit IRC
21:40 *** tmckay has quit IRC
21:59 *** shuyingya has joined #openstack-sahara
22:03 *** cpusmith_ has quit IRC
22:04 *** shuyingya has quit IRC
22:10 *** jeremyfreudberg has quit IRC
22:57 *** shuyingya has joined #openstack-sahara
22:58 *** ssmith has joined #openstack-sahara
23:02 *** shuyingya has quit IRC
23:37 *** shuyingya has joined #openstack-sahara
23:39 *** jeremyfreudberg has joined #openstack-sahara
23:42 *** shuyingya has quit IRC
23:53 *** jeremyfreudberg has quit IRC
23:56 *** jeremyfreudberg has joined #openstack-sahara

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!