Tuesday, 2019-09-17

02:09 <zhengMa> rosmaita: smcginnis: We have updated the patch - Leverage hw accelerator in image compression  #Link: https://review.opendev.org/668825. Expecting reviews! Thanks a lot! :)
02:15 <rosmaita> zhengMa: ty, will take a look
02:17 <rosmaita> KeithMnemonic: geguileo is usually in this channel, he is in UTC+2 time zone
02:19 <zhengMa> rosmaita: thanks!!
05:53 <openstackgerrit> ZhengMa proposed openstack/cinder master: Leverage hw accelerator in image compression  https://review.opendev.org/668825
12:57 <KeithMnemonic> geguileo are you around?
12:58 <geguileo> KeithMnemonic: yes, about to go have lunch, but around  :-)
12:59 <geguileo> KeithMnemonic: I read you made progress on the Lenovo/os-brick issue
13:00 <KeithMnemonic> the issue i am seeing with a Lenovo DS6200 is in this code: https://github.com/openstack/os-brick/blob/stable/pike/os_brick/initiator/connectors/iscsi.py#L799-L810  what happens is they migrate 6 instances in a batch, the first 3 run ok, but then all of a sudden no disks are added to the dict for removal
13:01 <geguileo> KeithMnemonic: were you able to check what was actually present in sysfs?
13:02 <geguileo> KeithMnemonic: under /sys/class/scsi_host/....
13:02 <KeithMnemonic> so for example, the last working one has " u'iqn.2002-09.com.lenovo:01.array.00c0ff3b2da0' ): (set([u'sdh']), set([ u'sdj',u'sdb', u'sdy' ]  ) "
13:02 <KeithMnemonic> i did not get back on the customer system yet, waiting on them
13:02 <KeithMnemonic> and then after the last good one, i see
13:03 <KeithMnemonic> " u'iqn.2002-09.com.lenovo:01.array.00c0ff3b2da0' ): (set([]), set([ u'sdj',u'sdb', u'sdy' ]  ) "
13:03 <KeithMnemonic> so instead of it picking sdj, sdb, or sdy, it is empty
13:04 <geguileo> KeithMnemonic: that's because the LUN doesn't match
13:04 <KeithMnemonic> ok so check sysfs to see the lun mapping?
13:04 <KeithMnemonic> since these instances boot and run fine, other than migration
13:05 <KeithMnemonic> what would mess up the mapping
13:05 <KeithMnemonic> and i did a similar test on an HPE 3par i have and it is fine
13:05 <KeithMnemonic> i do not have a DS6200
13:06 <geguileo> I can only think of the 3par backend changing the LUNs on its own
13:06 <geguileo> after the first 3 LUNs have been removed
13:06 <geguileo> but afaik that would break everything...
13:07 <geguileo> so I doubt that's happening
13:07 <KeithMnemonic> you mean the 6200?
13:08 <KeithMnemonic> the HPE 3par migrates 6 as expected
13:08 <KeithMnemonic> the Lenovo DS6200 does not
13:08 <KeithMnemonic> it only gets the first 3
13:08 <geguileo> sorry, I meant the 6200, not the 3par
13:09 <KeithMnemonic> so i am also waiting for them to test doing these not in 1 batch but perhaps separated by a 60 second wait or so
13:09 <KeithMnemonic> to see if there is some concurrency
13:09 <KeithMnemonic> i see something odd in the trace as well
13:09 <geguileo> I would look into the state when the error happens
13:10 <KeithMnemonic> so before the migrate, i see path_checker_state up 24, paths 24,
13:10 <KeithMnemonic> after the first one
13:10 <KeithMnemonic> down 4, up 20, paths 24
13:10 <KeithMnemonic> or maybe during the first instance to migrate
13:10 <KeithMnemonic> so this seems ok
13:10 <geguileo> doesn't sound right
13:11 <geguileo> no path should be down if things are properly configured
13:11 <KeithMnemonic> and it continues this 4x multiple until down 4, up 12, paths 16
13:11 <geguileo> it means that multipathing is probably not configured in OpenStack
13:11 <KeithMnemonic> and this coincides with instance 3 migrating correctly
13:11 <geguileo> but if they are migrated correctly, why are the paths appearing as down?
13:11 <KeithMnemonic> mpath is configured, i see it in the config file and virsh shows a dm-X device
13:12 <KeithMnemonic> that is before they get removed
13:12 <geguileo> then os-brick would clean the devices correctly
13:12 <geguileo> first flushing the multipath and then removing the devices
13:12 <KeithMnemonic> i see the multipath -f
13:12 <KeithMnemonic> in the trace
13:13 <geguileo> is there a trace when doing the multipath -f?
13:13 <KeithMnemonic> but then after that down 4, up 12, paths 16, i see down 5, up 11, paths 12
13:13 <KeithMnemonic> so now it is not even adding up
13:13 <KeithMnemonic> 5+11 != 12
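The inconsistency KeithMnemonic spots can be expressed as a tiny invariant check: multipathd's up paths plus down paths should equal its total path count. A minimal sketch, using the numbers quoted from the trace (the helper name is ours, not a multipathd API):

```python
# Sanity-check the multipathd counters quoted above: up + down should
# always equal the reported total number of paths.

def counts_consistent(up, down, total):
    return up + down == total

observations = [
    (24, 0, 24),   # before the migrations start: all paths up
    (20, 4, 24),   # after the first instance migrates
    (12, 4, 16),   # around instance 3
    (11, 5, 12),   # the reading that "is not even adding up": 11 + 5 != 12
]
results = [counts_consistent(up, down, total) for up, down, total in observations]
```

The last tuple failing the check is exactly the "5+11 != 12" observation: devices disappeared from the total without ever being counted as removed.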
13:13 <geguileo> paths should all be up
13:13 <geguileo> if they are properly removed
13:14 <geguileo> if you have the logs you should confirm that the first devices that were removed are the ones that should have been removed
13:15 <geguileo> you check when the volume attaches the LUN and the individual devices that form the volume
13:15 <geguileo> then when the detach is requested you confirm that os-brick is disconnecting the right device for the right LUN
13:16 <geguileo> it should be OK, but just to make sure that os-brick and Nova are not doing anything crazy
13:18 <geguileo> KeithMnemonic: I'm starving, going to have lunch, will be back in 30-40 minutes
13:18 <KeithMnemonic> ok np
13:18 <KeithMnemonic> ping you after you come back
13:57 <geguileo> KeithMnemonic: I'm back now
13:58 <KeithMnemonic> ok so here is what I did: i looked at virsh and got the dm-X for each of the 6 instances, then looked at multipath -ll to line up the disks
13:58 <KeithMnemonic> and i can see it follows in the correct order
13:58 <KeithMnemonic> until it goes astray
13:59 <geguileo> It should also be possible to follow it in the logs
13:59 <KeithMnemonic> i did
13:59 <geguileo> And you can also look into sysfs
14:00 <KeithMnemonic> and as i said, for the first 3 all is working as expected
14:00 <geguileo> but yeah, with that one should be able to see what happened when the detach failed
14:00 <geguileo> see the LUN that was trying to detach and confirm what's in the system
14:00 <KeithMnemonic> it never even does the multipath -f for the failed ones
14:02 <KeithMnemonic> so should _get_connection_devices return the device it needs to disconnect/detach
14:03 <KeithMnemonic> where does _get_connection_devices occur in the workflow?
14:17 <kukacz> hi, anybody familiar with the solidfire driver around? I'm troubleshooting disabled image caching after upgrading cinder from Ocata to Queens
14:18 <kukacz> seeing the sf_allow_template_caching parameter is now disabled by default, I wonder how to achieve similarly fast volume-from-image provisioning
14:19 <geguileo> KeithMnemonic: it's the first step
14:19 <geguileo> KeithMnemonic: before os-brick can do any detach it gathers the mapping of (target, portal) to volumes
14:20 <geguileo> KeithMnemonic: which is the purpose of the _get_connection_devices method
14:22 <geguileo> KeithMnemonic: And the volumes that are mapped are split in 2
14:22 <geguileo> KeithMnemonic: those that belong to this connection_properties, and those that don't
14:23 <geguileo> then we group all the volumes that belong to this connection so we can disconnect them
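The split geguileo describes matches the tuples KeithMnemonic pasted earlier: each (portal, target IQN) maps to a pair of sets, devices that belong to the connection being detached and devices that don't. A hedged sketch of that shape (the portal address and helper name are illustrative, not taken from os-brick itself):

```python
# Sketch of the {(portal, iqn): (belongs, others)} mapping that os-brick's
# _get_connection_devices builds before a detach. Only the "belongs" devices
# are flushed and removed; an empty "belongs" set means nothing is cleaned up.

def devices_to_flush(connection_devices):
    """Collect the block devices that would be flushed for this detach."""
    to_remove = set()
    for (portal, iqn), (belongs, _others) in connection_devices.items():
        to_remove |= belongs
    return to_remove

# The "last working" paste: sdh belongs to the detaching connection.
good = {('10.0.0.1:3260', 'iqn.2002-09.com.lenovo:01.array.00c0ff3b2da0'):
        ({'sdh'}, {'sdj', 'sdb', 'sdy'})}
# The failing paste: the belongs set is empty, so no multipath -f, no removal.
bad = {('10.0.0.1:3260', 'iqn.2002-09.com.lenovo:01.array.00c0ff3b2da0'):
       (set(), {'sdj', 'sdb', 'sdy'})}
```

This is why an empty first set in the paste translates directly into "no disks are added to the dict for removal."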
14:24 <KeithMnemonic> so that is where it lies, i am looking at sysfs now to line up paths/disks with dm devices, give me a few to finish that
14:24 <geguileo> KeithMnemonic: OK, here is the disconnection code https://github.com/openstack/os-brick/blob/stable/pike/os_brick/initiator/connectors/iscsi.py#L864-L897
14:29 <KeithMnemonic> so i verified all of the paths are present before the migration starts
14:29 <KeithMnemonic> and the luns line up
14:29 <KeithMnemonic> 3600c0ff0003b7d6c79615d5d01000000 dm-7 Lenovo,DS6200
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host5/session1/target5:0:0/5:0:0:2/block/sdh"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host6/session2/target6:0:0/6:0:0:2/block/sdg"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host8/session4/target8:0:0/8:0:0:2/block/sdi"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host7/session3/target7:0:0/7:0:0:2/block/sdf"
14:29 <KeithMnemonic> 3600c0ff0003b7d6c7a615d5d01000000 dm-10 Lenovo,DS6200
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host5/session1/target5:0:0/5:0:0:3/block/sdj"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host6/session2/target6:0:0/6:0:0:3/block/sdm"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host7/session3/target7:0:0/7:0:0:3/block/sdl"
14:29 <KeithMnemonic> Class Device path = "/sys/devices/platform/host8/session4/target8:0:0/8:0:0:3/block/sdk"
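The sysfs paths above encode the SCSI H:C:T:L address: in `5:0:0:2/block/sdh`, the last digit of the address is the LUN and the trailing component is the block device. A small sketch of pulling those out (the regex and function are ours, for illustration; the sample paths are from the paste):

```python
# Extract (LUN, device) from a sysfs class-device path like the ones pasted
# above, where the H:C:T:L address precedes the /block/<dev> component.
import re

SYSFS_RE = re.compile(r'/(\d+):(\d+):(\d+):(\d+)/block/(\w+)$')

def lun_and_device(sysfs_path):
    """Return the LUN number and block device name from a sysfs path."""
    host, channel, target, lun, dev = SYSFS_RE.search(sysfs_path).groups()
    return int(lun), dev

paths = [
    "/sys/devices/platform/host5/session1/target5:0:0/5:0:0:2/block/sdh",
    "/sys/devices/platform/host5/session1/target5:0:0/5:0:0:3/block/sdj",
]
```

Mapping every path this way is exactly the "line up paths/disks with dm devices" exercise: dm-7's members are all LUN 2, dm-10's are all LUN 3.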
14:29 <KeithMnemonic> so for example dm-7 migrates correctly
14:30 <smcginnis> paste is your (well, our) friend.
14:30 <KeithMnemonic> dm-10 is one of the ones where no multipath -f is ever run by brick and _get_connection_devices returns empty for the "belongs"
14:30 <geguileo> smcginnis: so true  :-)
14:30 <KeithMnemonic> yeah let me do that, sorry
14:31 <geguileo> KeithMnemonic: do you have the connection_information that is passed to os-brick's disconnect for dm-10?
14:31 <KeithMnemonic> let me get that in the next paste
14:32 <KeithMnemonic> did you want that output when it starts the migrate for the instance attached to dm-10?
14:33 <geguileo> KeithMnemonic: I'd like to see the call to os-brick for the disconnect of the volume
14:33 <geguileo> KeithMnemonic: where one can see the connection information
14:33 <geguileo> KeithMnemonic: it shows the dictionary with the target_iqn, target_portal, target_iqns, target_portals, etc
14:34 <geguileo> KeithMnemonic: in that dict we can see the target_lun
14:39 <geguileo> KeithMnemonic: that doesn't look like the Nova call to os-brick's disconnect_volume
14:40 <KeithMnemonic> if you give me a keyword to search on
14:41 <geguileo> that second one looks right
14:42 <geguileo> KeithMnemonic: does that data dictionary have target_iqns or just target_iqn?
14:42 <geguileo> (the pastebin doesn't have the complete "data" dictionary so I don't know)
14:43 <KeithMnemonic> target_iqn,  http://paste.openstack.org/show/777166/
14:44 <KeithMnemonic> so let me check a working one
14:44 <KeithMnemonic> working one also has target_iqn
14:44 <geguileo> KeithMnemonic: something looks weird...
14:45 <KeithMnemonic> also what is weird is on the 3par, i never see it do this iscsiadm -m discoverydb -o show -P 1 but i do on the lenovo
14:45 <KeithMnemonic> so it is like the method to get stuff from the backend is different
14:46 <KeithMnemonic> or the backends just respond differently
14:46 <geguileo> KeithMnemonic: because the 3PAR driver returns all the target IQNs and portals
14:46 <geguileo> KeithMnemonic: whereas the Lenovo relies on discovery
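The distinction geguileo draws can be sketched from the connection-properties dictionaries the two drivers return: one enumerates every path up front (plural `target_iqns`/`target_portals`/`target_luns` keys), the other supplies a single target and leaves the rest to iscsiadm discovery. A hedged, simplified illustration — the dicts and helper are made up for this example, not real driver output:

```python
# Sketch of why the lenovo path runs `iscsiadm -m discoverydb` and the 3PAR
# path does not: only single-target connection properties need discovery to
# find the remaining portals for multipath.

def needs_discovery(connection_properties):
    """True when the driver supplied only one target, so the connector must
    discover the remaining portals itself."""
    return 'target_iqns' not in connection_properties

# 3PAR-style: every path enumerated by the driver.
threepar_style = {
    'target_iqns': ['iqn.example:a', 'iqn.example:b'],
    'target_portals': ['10.0.0.1:3260', '10.0.0.2:3260'],
    'target_luns': [2, 2],
}
# Lenovo-style: a single target; multipath relies on discovery.
lenovo_style = {
    'target_iqn': 'iqn.2002-09.com.lenovo:01.array.00c0ff3b2da0',
    'target_portal': '10.0.0.1:3260',
    'target_lun': 2,
}
```

That also matches the earlier observation that the failing paste had only `target_iqn`, not `target_iqns`.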
14:46 <KeithMnemonic> so maybe something is not discovered properly?
14:46 <KeithMnemonic> after some time.
14:46 <KeithMnemonic> i wonder if this is some concurrency issue
14:47 <geguileo> let me have a look at the discovery code
14:47 <KeithMnemonic> i.e. it gets busy migrating 1-3, so then discovery on 4-6 does not work correctly
14:48 <KeithMnemonic> Be back in a few, thanks again geguileo for helping me dig into this
14:54 <geguileo> KeithMnemonic: OK, when you get back I'd like to see the log entry on the request of the failed disconnect that says "Getting connected devices for (ips,iqns,luns)='
15:01 <raghavendrat> hi e0ne: eharney: hemna_: jungleboyj: smcginnis: whoami-rajat: this is regarding https://review.opendev.org/#/c/677945/
15:02 <raghavendrat> Zuul and HPE Storage CI [python 3.7] have passed.
15:25 <KeithMnemonic> ok back, let me grab it and put it into paste
15:30 <KeithMnemonic> geguileo: so that is interesting, i see 6 entries, which should correspond to the 6 migrations; the luns in those are 7,5,4,2,2,2 but those are not the luns listed under /sys
15:30 <KeithMnemonic> 7 is not even listed here http://paste.openstack.org/show/777163/
15:31 <geguileo> KeithMnemonic: now that's weird...
15:31 <KeithMnemonic> so that would make sense why we saw the down in the path
15:31 <KeithMnemonic> if 7 is not listed and it pulled that one first
15:32 <geguileo> KeithMnemonic: could you check your system logs to see if there's any SCSI message about LUNs?
15:32 <KeithMnemonic> like remapping?
15:33 <KeithMnemonic> like this " Warning! Received an indication that the LUN assignments on this target have changed. The Linux SCSI layer does not automatical" ;-)
15:33 <geguileo> KeithMnemonic: yes
15:34 <KeithMnemonic> that occurs before the migrations start
15:34 <KeithMnemonic> ok so that is why this all goes fubar
15:34 <geguileo> KeithMnemonic: then that's a problem with the 6200 system
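The check geguileo asked for amounts to grepping the kernel log for that warning. A small sketch of it in Python; the sample log text below mimics the message KeithMnemonic quoted and is not a captured log:

```python
# Scan kernel-log text (e.g. the output of `dmesg`) for the SCSI warning
# that LUN assignments changed on a target -- the smoking gun for the
# backend remapping LUNs underneath attached volumes.

LUN_CHANGE_MARKER = "LUN assignments on this target have changed"

def lun_change_warnings(kernel_log_text):
    """Return the log lines that report a LUN remap on a target."""
    return [line for line in kernel_log_text.splitlines()
            if LUN_CHANGE_MARKER in line]

sample = """\
[1234.5] sd 5:0:0:2: [sdh] Attached SCSI disk
[1300.1] Warning! Received an indication that the LUN assignments on this target have changed.
[1300.2] The Linux SCSI layer does not automatically remap LUN assignments.
"""
```

Finding that warning before the failed migrations is what confirms the 6200, not os-brick, moved the LUNs.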
15:34 <KeithMnemonic> question is why is the kernel detecting a change
15:35 <KeithMnemonic> jungleboyj: Paging a 6200 person ;-)
15:36 <geguileo> KeithMnemonic: because there's probably an AEN/AER message sent by the 6200
15:56 <hemna_> so the 6200 is causing this ?
15:56 <geguileo> hemna_: looks that way
15:57 <geguileo> hemna_: it seems that once you remove the lower 3 LUNs it goes and reassigns the others
15:57 <hemna_> ok, not much if anything we can do then
15:57 <hemna_> why the hell would they do that?
15:57 <geguileo> hemna_: not really, because the reassigning is even breaking attached volumes
15:57 <hemna_> all of the existing paths would fail on the host
15:57 <geguileo> hemna_: exactly
15:57 <hemna_> w t f
15:57 <hemna_> that's bad mmmkay
15:58 <hemna_> drugs are bad mmmkay
15:58 <geguileo> yeah, because Linux doesn't automatically change the LUNs even when it receives the message from the backend
15:58 <hemna_> changing luns is bad mmmkay
16:00 <geguileo> This is what I said back at the beginning of this session:
16:00 <geguileo> geguileo | I can only think of the 3par backend changing the LUNs on its own
16:00 <geguileo> geguileo | after the first 3 LUNs have been removed
16:00 <geguileo> geguileo | but afaik that would break everything...
16:00 <geguileo> geguileo | so I doubt that's happening
16:01 <geguileo> and it turns out that's precisely what's happening  :-(
16:01 <hemna_> I don't think the 3par changes luns
16:01 <hemna_> the lun id is set at initialize_connection time and never changes for an existing vol export
16:01 <geguileo> hemna_: 3PAR doesn't do it
16:02 <geguileo> hemna_: I made the mistake of saying 3PAR instead of 6200
16:02 <hemna_> once it has the lun id assigned, it's never changed
16:02 <geguileo> hemna_: yeah, I was referring to the 6200 that's giving trouble
16:47 <openstackgerrit> Helen Walsh proposed openstack/cinder master: PowerMax Docs - Short host and port group name changes  https://review.opendev.org/682696
17:02 <KeithMnemonic> hemna: geguileo: thanks again, i am asking the customer to have lenovo examine their config/FW on the 6200
17:02 <hemna_> ok good luck KeithMnemonic
17:02 <geguileo> KeithMnemonic: no problem, yes, that would be best
18:32 <gregwork> what options are available when a volume snapshot is stuck in "Deleting" for a long time?
18:34 <gregwork> i have an app (cloudforms) which is going through and performing smart state analysis (snapshot instance volume / analyze contents / discard snapshot) on a bunch of tenants, and for whatever reason that process failed.  I have a pile of snapshots all sitting in "Deleting" now for a long time and im trying to clean them up.  the back end storage is ceph
18:34 <gregwork> the ones stuck in deleting have os-extended-snapshot-attributes:progress 0%
18:37 <hemna_> did you initiate the snapshot delete ?
18:37 <gregwork> it did
18:38 <gregwork> but it has been in this state for hours
18:38 <hemna_> I'd check the c-vol logs
18:38 <gregwork> ls -la
18:38 <hemna_> at the time the delete was issued
18:38 <gregwork> er wrong window sorry
18:39 <gregwork> there appear to be a bunch of sqlalchemy errors and lost connectivity to the back end mysql
18:40 <hemna_> ceph driver complaints?
18:40 <gregwork> nothing at present, ceph -s shows HEALTH_OK
18:40 <hemna_> if the ceph driver just lost connectivity to the cluster, you can simply reset the state of the snaps and try again
18:42 <gregwork> cinder-sched says NoValidBackend: Cannot place snapshot on hostgroup@tripleo_ceph#tripleo_ceph
18:42 <gregwork> there is a bunch of that
18:42 <gregwork> around the same time
18:42 <hemna_> anything else around that message?
18:42 <hemna_> probably a reason why it can't place it
18:43 *** hemna_ is now known as hemna_afk
18:43 <gregwork> volume service is down. host: hostgroup@tripleo_ceph
18:43 <hemna_afk> there yah go
18:43 <hemna_afk> if the host is back up, then just reset state on the snaps and delete em
18:43 <hemna_afk> ok really afk
18:43 <gregwork> how do i reset the state
18:44 <gregwork> alright ill look into it
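The cleanup hemna_afk suggests is: once the volume service is back up, reset the stuck snapshots' state (the cinderclient CLI has `cinder snapshot-reset-state` for this) and retry the delete. A hypothetical helper for the first half of that, finding the stuck snapshots; the snapshot dicts are made-up examples, not real API output:

```python
# Pick out snapshots stuck in 'deleting' so an admin can reset their state
# (e.g. `cinder snapshot-reset-state --state available <id>`) and retry the
# delete once the backend service is reachable again.

def stuck_snapshots(snapshots, stuck_status='deleting'):
    """Return the ids of snapshots sitting in the given stuck status."""
    return [s['id'] for s in snapshots if s['status'] == stuck_status]

snaps = [
    {'id': 'snap-1', 'status': 'deleting'},
    {'id': 'snap-2', 'status': 'available'},
    {'id': 'snap-3', 'status': 'deleting'},
]
```

Resetting state only changes the database record; it is only safe here because the delete never reached the ceph backend in the first place.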
19:17 <mriedem> eharney: smcginnis: can we get https://review.opendev.org/#/c/668130/ into devstack-plugin-ceph now so we don't regress any shelve testing in queens?
19:17 <mriedem> the nova change is merged
19:20 <eharney> mriedem: looks good to me
21:59 <jungleboyj> rosmaita: Were you going to do the forum topic submissions?
21:59 <rosmaita> jungleboyj: yes, i think they are due on friday?
22:00 <jungleboyj> That sounds right.  Ok, just wanted to make sure you were planning to do that so it didn't get missed.
22:02 <jungleboyj> Thank you.

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!