Sunday, 2017-11-12

<Saul_> Anyone alive? [02:38]
<gunix> hey guys, how do you deal with MDS failover issues on ceph? cause it has like 2 mins downtime if your active one fails [11:02]
<gunix> guys? [15:37]
<iggy> gunix: maybe you meant to ask in #ceph? [16:51]
<gunix> iggy: #ceph is sadly a dead channel. there is nobody there actually caring about answering. [17:43]
<gunix> iggy: considering ceph is present in half of the openstack deployments, i have a better chance of getting an answer by asking here [17:43]
<hackoo> I am facing an error with Keystone while creating the first project. I have tried multiple times with different ways to set up. Now doing a manual setup using the OpenStack docs but still facing the same issue. The error says: "Discovering versions from the identity service failed when creating the password plugin. Attempting to determine version from URL." [17:50]
<hackoo> Please let me know if you have any pointers to solve this. [17:51]
<yankcrime> gunix: the bits of ceph that are fundamental to most openstack deployments are block and object - i think file is a lot less common [17:51]
<yankcrime> the official ceph irc channel isn't on freenode btw [17:52]
<yankcrime> it's on OFTC [17:52]
<gunix> yankcrime: why are you telling me this? is the MDS required only for file? [17:56]
<yankcrime> gunix: correct [17:56]
<yankcrime> if you're only using block and object, you don't need an MDS [17:56]
<gunix> yankcrime: oh, cool! [19:25]
<gunix> yankcrime: so what i need is 2 nodes for OSDs, with 2 OSDs each (4 OSDs in total), and 3 monitors... right? [19:27]
<iggy> if you don't want to lose data, you want 3 OSDs... and if you don't want your cluster to be inaccessible when you lose one OSD box, you want 3 of those too [19:42]
<gunix> iggy: why do you need 3 OSDs when you have 3 monitors? aren't the monitors the ones doing STONITH? [20:07]
<iggy> there is no STONITH in ceph, and no, the mons don't mediate data replication... the OSDs do that all themselves [20:17]
<gunix> iggy: so if i have 3 OSDs and one OSD gets disconnected from the other two, it will continue to write data on its own? [20:18]
<iggy> no, but if you have only 2 (with 2 replicas), each one will write independently thinking they can [20:19]
<gunix> iggy: ok, so if you have 3, and one loses connection, it stops... isn't that STONITH? [20:20]
<iggy> no... STONITH literally means "shoot the other node in the head" [20:20]
<iggy> i.e. turning off the disconnected node in some way [20:21]
<gunix> ok, so what ceph does is basic fencing, without STONITH? or not even that? [20:21]
<iggy> OSDs will only accept writes if they have quorum [20:22]
<gunix> and... that is not fencing? [20:23]
<iggy> but if you have replicas=2, each one will think it has quorum [20:23]
<iggy> in any case, ceph 101 is way off topic for this channel [20:24]
<gunix> you can join #ceph if you want and help me on that channel [20:25]
<gunix> people there don't talk [20:25]
<gunix> and i find it kind of brutal at the start [20:25]
<gunix> maybe it doesn't hurt so much after the first try [20:25]
<iggy> I'm in the ceph channel (the proper one anyway) [21:24]
<gunix> ? which one is the proper one? [21:24]
<yankcrime> [17:52:12] <yankcrime> it's on OFTC [21:25]
<yankcrime> gunix ^ [21:26]
<SamYaple> iggy: with 2 replicas each one *won't* think it has quorum. quorum is defined as >50%; with two copies, both must be online to have quorum. so that's not a risk [21:35]
<SamYaple> which, talking about quorum at the OSD level isn't really accurate anyway since that's not where it lives. that's entirely controlled by size/min_size in ceph [21:36]
<SamYaple> having size=2 and min_size=1 might lead to the situation you are talking about, but the monitors (who maintain quorum) will prevent different OSDs accepting different writes like that from happening [21:37]
<gunix> SamYaple: so quorum is maintained by monitors, not by OSDs? [21:38]
<gunix> do the OSD numbers count for quorum? [21:38]
<SamYaple> gunix: the OSDs talk to the monitors to learn their place in the cluster. the monitors maintain quorum [21:38]
<gunix> SamYaple: thank you! that explains why i see people having 2 OSD nodes and 3 monitor nodes without any worries [21:39]
<iggy> you are right, it was the 2/1 case I was referring to [21:39]
<SamYaple> well, i wouldn't go that far ("without any worries"), but yea [21:39]
<SamYaple> i would never do less than 3 copies with a min_size of 2 [21:39]
<SamYaple> iggy: yea, 2/1 size/min_size definitely opens the door to data corruption issues for sure [21:40]
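The quorum and size/min_size behaviour discussed above can be sketched in a few lines (a toy model, not Ceph's actual code; the >50% monitor rule and the `min_size` gate on writes are as described in the conversation):

```python
def mon_quorum(online: int, total: int) -> bool:
    """Monitors hold quorum only when strictly more than half are up,
    which is why 3 monitors (surviving one failure) is the usual minimum."""
    return 2 * online > total

def pg_accepts_writes(online_replicas: int, min_size: int) -> bool:
    """OSD side: a placement group serves writes only while at least
    min_size replicas of it are available."""
    return online_replicas >= min_size

# size=3 / min_size=2: one lost replica is tolerated, two lost replicas block I/O
# size=2 / min_size=1: a lone surviving replica keeps taking writes -- the
# divergence/corruption risk iggy and SamYaple are talking about
```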
<gunix> SamYaple: would you keep monitors on controllers or on ceph nodes? [21:41]
<SamYaple> gunix: i typically do hyperconverged. two types of nodes, controllers and computes. controllers have mons, computes have storage [21:41]
<SamYaple> never run OSDs and mons on the same nodes [21:41]
<gunix> that kind of makes sense, cause the VMs will have the storage on the same node and you reduce network traffic a lot [21:42]
<SamYaple> gunix: *potentially* it will have some parts of the storage on the same node [21:42]
<gunix> so your basic setup is 3 computes and 3 controllers... and you scale by adding controllers when required? [21:43]
<SamYaple> but it doesn't really reduce traffic [21:43]
<SamYaple> i don't normally scale past 3 controllers, no [21:43]
<gunix> SamYaple: if you have only 3 controllers, don't you get one replica on each node with the default setup? [21:43]
<gunix> *if you have 3 COMPUTES [21:43]
<gunix> yea, i messed up that scaling question, rewriting... [21:44]
<gunix> so your basic setup is 3 computes and 3 controllers... and you scale by adding COMPUTES when required? [21:44]
<SamYaple> sure you do, one on each [21:44]
<gunix> ... [21:44]
<SamYaple> but maybe not the primary one [21:44]
<SamYaple> which means the read still crosses the network [21:44]
<SamYaple> (that is up for future optimization though) [21:44]
<gunix> how much RAM do you need on the controller nodes? [21:45]
<SamYaple> depends on the number of OSDs, what else you're running, etc, etc [21:45]
<gunix> will 3 nodes with 24 GB ram do for a small setup? [21:46]
<SamYaple> with a 3 node setup, you could probably get away with 16GB of ram to run everything [21:47]
<SamYaple> 16GB per controller [21:47]
<gunix> yea [21:47]
<SamYaple> the compute ram would depend on the amount of storage; 1GB per 1TB of storage is recommended [21:47]
<gunix> so if you get like 10 computes and around 400 instances, you probably need 120 GB ram i guess [21:47]
<SamYaple> on the mons? [21:48]
<gunix> 120 GB on the controller+mon nodes, yes [21:48]
<SamYaple> you'll be hard-pressed to break 64GB on such a small scale [21:48]
<gunix> it scales more with storage, rather than with number of vxlan networks & instances? [21:48]
<SamYaple> remember, with ceph each bit is exponentially more storage, not more ram. so not quite [21:49]
<gunix> *the required amount of ram, i mean [21:49]
<SamYaple> anyway, 128GB controller nodes should last you up to clusters of several hundred compute nodes [21:49]
<gunix> hundred compute nodes? [21:49]
<SamYaple> yea [21:50]
<gunix> wow [21:50]
<SamYaple> you'll run into 1000's of other issues before ram is the problem [21:50]
<SamYaple> probably [21:50]
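The sizing rule of thumb above can be restated as a quick back-of-the-envelope helper (just arithmetic restating the discussion; the 1 GB per 1 TB figure is the recommendation quoted above, and the example node is hypothetical):

```python
def osd_ram_gb(storage_tb: float, gb_per_tb: float = 1.0) -> float:
    """Rule of thumb from the discussion: ~1 GB of RAM per 1 TB of OSD storage."""
    return storage_tb * gb_per_tb

# a hypothetical compute node hosting 8 TB of OSDs would set aside ~8 GB for ceph
ceph_ram = osd_ram_gb(8)
```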
<gunix> so that means potentially 1000 projects, each with own vxlans and tons of instances? [21:50]
<SamYaple> oh, now you're asking other questions. i've not seen 500+ network namespaces work so well before [21:51]
<SamYaple> regardless of resources [21:51]
<SamYaple> the number of instances doesn't matter, the number of networks/routers will [21:51]
<gunix> what do they stress out? proc/network devices? [21:51]
<SamYaple> neutron code is literally calling `ip netns ls` and processing that every few seconds [21:52]
<gunix> SamYaple: so you rather want a few big customers, instead of a lot of small projects [21:53]
<gunix> maybe limit the quota of virtual networks to... 1... :)) if customers are small [21:53]
<gunix> * or tenants, not customers. w/e. [21:54]
<SamYaple> or scale out the number of nodes you have handling dhcp/l3 traffic, sure [21:54]
<gunix> does it work to just add controllers? [21:54]
<SamYaple> it works by running more copies of the agents; if that is "adding controllers" in however you are deploying openstack, then yes [21:55]
<gunix> hmm... or not really controllers, i guess you can add more nodes just for neutron without scaling rabbitmq and galera and everything [21:55]
<gunix> no, you are right. it makes no sense to spam all services when you only need more neutron agents [21:55]
<SamYaple> galera and ceph-mon are the *only* things that need 3 copies to work right [21:55]
<SamYaple> everything else is HA at two copies [21:55]
<gunix> but AFAIK neutron agents use VRRP [21:55]
<gunix> and they don't balance the load, you have 1 active router [21:56]
<SamYaple> only for l3 HA [21:56]
<SamYaple> the default "legacy" router has no HA [21:56]
<gunix> "The Networking (neutron) service L3 agent is scalable, due to the scheduler that supports Virtual Router Redundancy Protocol (VRRP) to distribute virtual routers across multiple nodes." [21:57]
<gunix> from: https://docs.openstack.org/ha-guide/networking-ha-l3.html [21:57]
<gunix> i can't figure this out. if it uses VRRP, how can it load balance? [21:57]
<SamYaple> see the last few letters of that url [21:57]
<SamYaple> vrrp has nothing to do with load balancing? [21:57]
<gunix> i will write an example, maybe i am really badly confused here [21:58]
<SamYaple> routerA and routerB each have individual ip addresses [21:58]
<SamYaple> they want to make a single new ip highly available (a gateway). [21:58]
<gunix> so let's say you have 3 controllers: .11 .12 .13... and 3 computes: .21 .22 .23... afaik if the controllers have the routers, they will get an IP with VRRP, like .10, and when 1 node dies, the other will take over that IP. and the router is actually on .10... am i missing something? [21:59]
<SamYaple> through some math, one of them becomes the "leader" and advertises that it holds the vip [21:59]
<SamYaple> if it goes down, the other will advertise that it holds the vip [21:59]
<SamYaple> no load balancing [21:59]
<SamYaple> that's l3 HA [21:59]
<SamYaple> not the default [21:59]
gunixhmm.22:00
gunixso what's the default?22:00
SamYaplethe default "legacy" router has no HA22:00
gunixso how do you load balance by adding  more network angets on new network nodes??22:00
SamYaplethat doesnt exist?22:01
gunixthat's what's confusing me. if they use VRRP, how does it help to get 7 agents instead of 3 agents22:01
SamYaplethats not a thing22:01
SamYaplethats not what vrrp is about22:01
gunixmeans i misread something you wrote earlier22:01
SamYapleyou are for sure22:01
SamYapleits about always being available in an active/passive sense22:01
SamYapleits not about loadbalancing22:01
*** ginsul has quit IRC22:01
gunixSamYaple | or scale out the number of nodes you have handling dhcp/l3 traffic, sure22:02
gunixthis. this is what confuses me. how do you do that? :))22:02
SamYaplejust run more agents. dhcp is active/active (more or less)22:02
SamYaplel3 agent just runs what its told22:02
SamYapleand that might be an active/passive router22:02
SamYapleto state this simply: if routerA holds the vip and it dies, all active connections die. *however* routerB will pick up the vip and TCP will retry22:05
SamYapleudp is lost22:05
SamYaplel7 never gets involved22:05
gunixyea so that means routerA will have the load and routerB will have 0 load. and when routerA dies, routerB takes over all the load. and during this time routerC does nothing.22:05
SamYapleyup22:06
gunixso no load balancing22:06
SamYaplecorrect22:06
SamYaplenow. if a loadbalancer is listening on the vip, it can do loadbalancing... but only on the node with the active vip22:06
*** st4ckgneer has quit IRC22:07
*** jackNemrod has quit IRC22:07
SamYaplewhat you are really asking about is DVR, each compute can masquerade as a router and send traffic out as the "gateway" without a central router. but that only applies to EGRESS traffic, ingress still has to go through the single router22:07
SamYapleand you *really* dont want to mess with DVR this early (or for only 3 compute nodes)22:08
gunixso all you can do if you have lots of VXLAN networks is have some dedicated nodes to do only L3....22:08
SamYaplethat works yea22:08
SamYaplethe guides call that a "network node"22:08
gunixbecause the load will never be on the DHCP agent, it doesn't help if it's active/active. you need the routers.22:08
SamYaplewhen an instance sends a dhcp request, all dhcp nodes will respond, the first one the instance receives wins22:09
SamYaplebut beyond that, they dont do anything22:09
SamYapleexcept maybe handle metadata depending on configuration22:09
gunixfrom basic networking knowledge, if you have multiple dhcp servers on a network, it will have a bad outcome22:10
gunixbut i guess neutron has a way to deal with that22:11
SamYapleall the leases are stored in the shared database, so all dhcp agents are assigning the same addresses22:13
SamYapleyes, its handled22:13
gunixnice22:13
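The reason several DHCP agents can answer the same request safely is the shared lease table: every agent consults the same binding, so whichever reply arrives first offers the same address. A hypothetical in-memory model of that idea:

```python
# mac -> fixed ip, standing in for the lease data neutron keeps
# in its shared database (addresses below are made up)
leases = {"fa:16:3e:00:00:01": "10.0.0.5"}

def dhcp_offer(agent: str, mac: str) -> tuple:
    """Any agent offers the ip already bound to this port's mac."""
    return agent, leases[mac]

offers = [dhcp_offer(a, "fa:16:3e:00:00:01")
          for a in ("agent1", "agent2", "agent3")]
# all three agents offer 10.0.0.5; the first reply to arrive wins
assert {ip for _, ip in offers} == {"10.0.0.5"}
```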
*** rcernin has joined #openstack22:14
gunixthank you for explaining this. you are really nice, as always. i'm going to sleep, it's 00:14 here. nn22:14
SamYaplehaha hear that iggy. im nice22:16
*** st4ckgneer has joined #openstack22:17
*** SmearedBeard has quit IRC22:17
*** st4ckgneer has joined #openstack22:18
gunixyes, in an openstack community way. you are very linux. the linux type of nice.22:20
SamYaplegunix: i work with iggy, im just joking with him22:21
iggyI don't buy it22:21
gunixi don't know about other stuff. can't assess your level of niceness within other fields22:21
*** Manor has quit IRC22:21
*** forsook has quit IRC22:21
*** Manor has joined #openstack22:21
SamYaplegunix: im more direct/blunt and fact based, im glad it comes across to you as niceness22:21
SamYapleim happy to help22:22
*** ikhan_ has joined #openstack22:22
*** jeroen92 has quit IRC22:23
gunixthe world today works like this: if you are an idiot, you pay microsoft or vmware to tell you that you are an idiot. you tell me that for free.22:23
SamYapleman your bar of niceness is super low. with that definition im the nicest person ever!22:24
gunixreally now, joke aside, i'd rather have blunt people that tell me the facts and help me in the process, than people that don't say anything.22:24
SamYaplethen say no more. im your man22:25
SamYaplemake sure you double check what im saying though22:25
SamYapleive been wrong once or twice22:25
*** Manor has quit IRC22:26
*** forsook has joined #openstack22:27
*** ffiore has joined #openstack22:31
*** ffiore has quit IRC22:31
*** geaaru_ has joined #openstack22:31
*** geaaru has quit IRC22:33
*** ffiore has joined #openstack22:36
*** dnavale has joined #openstack22:37
*** dneary has quit IRC22:40
*** TMM has joined #openstack22:47
*** TMM has joined #openstack22:47
*** daynaskully has joined #openstack22:50
*** Calvin` has quit IRC22:53
*** charcol has joined #openstack22:53
*** ffiore_ has joined #openstack22:53
*** ffiore has quit IRC22:54
*** Yarboa has quit IRC22:54
*** ffiore_ has quit IRC22:57
*** Calvin` has joined #openstack22:59
*** Manor has joined #openstack23:03
*** dtrainor has quit IRC23:05
*** Manor has quit IRC23:06
*** Manor has joined #openstack23:07
*** Manor has quit IRC23:11
*** sshnaidm|off has quit IRC23:11
*** smccarth_ has joined #openstack23:16
gunixSamYaple: i just read something and it confuses me. it's about snapshots on ceph. so normally in virtualization (vmware .vmdk, .qcow2, lvm), when you snapshot your VM, you get a delta file (or delta thin volume for LVM) where your changes are tracked. if you revert to the snapshot, the delta gets removed and you have your system back ... happens really fast.23:17
gunixand then i found this: Note: Rolling back an image to a snapshot means overwriting the current version of the image with data from a snapshot. The time it takes to execute a rollback increases with the size of the image. It is faster to clone from a snapshot than to rollback an image to a snapshot, and it is the preferred method of returning to a pre-existing state.23:17
gunixso in ceph, you have the snapshot (read only info) copied on top of the delta file, and the original removed, instead of the other way around? what?23:17
gunixhow can this provide performance in any way?23:18
*** smccarthy has quit IRC23:19
SamYaplei think there may be a bit of misunderstanding with the way ceph works23:22
SamYapleits not so much a delta23:22
SamYapleso ceph tracks the list of blocks, a snapshot freezes that list23:22
SamYapleanything using that snapshot will maintain its list of blocks it modifies after the snapshot23:22
SamYaplebut you wouldnt really need to rollback at that point23:23
SamYaplesnapshots in ceph are kind of a tricky subject depending on what we are talking about23:23
gunixSamYaple: so when a snapshot is active, it uses the written area from new blocks and the unmodified area from snapshot blocks, having a map of how to find data? (i'm asking because that's how qcow2 snapshots work)23:32
*** smccarth_ has quit IRC23:32
*** BenderRodriguez has joined #openstack23:33
*** smccarthy has joined #openstack23:33
SamYapleyea23:33
gunixgood. then on revert to snapshot, why not just delete the new blocks and use all the old blocks?23:34
SamYaplequite a bit more efficiently than qcow2 snapshotting because ceph, but general idea yea23:34
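The block-map behaviour just agreed on can be modelled in a few lines (a toy copy-on-write sketch of the general idea, not ceph's or qcow2's actual data path):

```python
# Frozen block list captured by the snapshot; block contents are invented.
snapshot = {0: b"base0", 1: b"base1", 2: b"base2"}
overlay = {}  # blocks written after the snapshot was taken

def write(block: int, data: bytes) -> None:
    overlay[block] = data  # snapshot blocks are never touched

def read(block: int) -> bytes:
    # modified blocks come from the overlay, everything else
    # falls through to the frozen snapshot
    return overlay.get(block, snapshot[block])

write(1, b"new1")
assert read(1) == b"new1"   # written area served by new blocks
assert read(0) == b"base0"  # unmodified area served by the snapshot
```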
SamYaplemy question is... why revert to a snapshot?23:34
SamYaplewhat are you trying to do23:34
gunixyou have a VM, you need to upgrade some service, you create a snapshot, upgrade fails, you revert to snapshot23:34
SamYapleah. snapshots dont work like that in openstack23:35
SamYaplea "snapshot" is going to be a new glance image23:35
SamYapleyou would boot from your snapshot (ie a new instance)23:35
gunixdude, i am talking about cinder snapshots23:36
gunixyou can do volume snapshots for cinder volumes23:37
*** smccarthy has quit IRC23:38
gunixyou also have the option of creating a backup, which does a snapshot of the volume and then moves it over to swift. however i have no idea how this happens when ceph is involved, as block storage for cinder AND object storage for glance AND object storage for cinder backups (cinder backup driver ceph)23:38
gunixthis process seems really easy to understand when using iscsi target cinder, with lvm backend, and swift as backend for glance.23:39
gunixtbh i like this setup more, maybe because i understand it ... however it implies proprietary storage OR lvm storage (no HA so equally bad) ... and the idea would be to walk the open source trend.23:40
SamYapleah for cinder this is because of cephs internal snapshot stuff23:40
SamYapleperformance is about equal when using read-only snapshots. if you are trying to write to the base volume of a snapshot performance tanks23:40
*** TMM has quit IRC23:41
SamYaplecinder does have the ability to call flatten_snapshots, which brings back performance at the cost of duplicating that data (flattening)23:41
gunixyea, and that internal stuff is confusing. i am trying to understand ceph because it has huge hype and everybody loves it ... because it scales well23:41
gunixdo flatten snapshots just duplicate the data?23:41
masbergood morning all, I am having issues creating instances, status stays as BUILD for a while and then fails. This is the nova-conductor logs http://paste.openstack.org/raw/626103/23:41
gunixinstead of doing a delta?23:41
SamYaplegunix: correct23:42
gunixhow can that be better performance?23:42
SamYaplenow we are getting into ceph internals23:42
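What flattening buys you can be sketched without the ceph internals: copy every block the child still reads from its parent snapshot into the child itself, so reads no longer chase the parent chain, at the cost of duplicating data. A hypothetical model mirroring what rbd flatten achieves, not its implementation:

```python
# parent snapshot blocks and a child that rewrote only block 1
# (contents invented for illustration)
parent = {0: b"p0", 1: b"p1", 2: b"p2"}
child = {1: b"c1"}

def flatten(child: dict, parent: dict) -> None:
    """Copy parent blocks into the child, keeping the child's own writes."""
    for block, data in parent.items():
        child.setdefault(block, data)

flatten(child, parent)
# the child is now self-contained: full-speed reads, duplicated data
assert child == {0: b"p0", 1: b"c1", 2: b"p2"}
```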
SamYaplemasber: check compute nodes are up, and have resources available23:42
gunixmasber: NoValidHost: No valid host was found. There are not enough hosts available.23:42
gunixmasber: do you have compute nodes?23:43
masberI have 3 compute nodes23:43
masberand no instances running23:43
gunixSamYaple: i should sleep but i keep reading stuff. last question (i hope) ... what do you think about this new thing with scaleio, providing better performance than ceph for block storage?23:44
gunixmasber: check if nova-compute is running on the 3 compute nodes. do a hypervisor list from the openstack api23:44
masberthe instances gets a node which means scheduler filters are passed23:44
masberlooks good, http://paste.openstack.org/show/626104/23:45
gunixmasber: if hosts are looking good from openstack api, please check quota on the project23:45
SamYaplegunix: i think its really easy to provide better performance than ceph for block storage. but its hard to provide the data resiliency that ceph can. and i also use radosgw and cephfs23:45
SamYapleif you only use rbd, you have lots of options23:46
SamYapleceph has never been the end-all-be-all of performance23:46
SamYapleand never will be23:46
gunixmasber: dude, that host has 1.5 TB memory?23:46
gunixand 192 CPUs??23:46
SamYaplemasber: `nova service-list`23:46
masberquota is ok, and instance flavor only needs 1 cpu23:46
SamYaplemasber: actually just reread your first paste23:47
SamYapleits scheduling, but failing on all your computes23:47
SamYaplelook at the nova-compute logs23:47
masbergunix, 3 hosts, 512 GB ram each ... 5x cores23:47
gunixnice23:47
gunixSamYaple: are there any open source alternatives that provide HA block storage for cinder?23:48
SamYaplegunix: maybe glusterfs with file-based? dont really keep up. im pretty hardcore into ceph because its a one stop shop for all my storage needs23:49
SamYapleand nothing else does that for me23:49
*** gin0 has joined #openstack23:49
*** gin0 has joined #openstack23:49
gunixSamYaple: that glusterfs vs ceph war has been going for years and i still didn't find any good benchmark23:50
SamYapleeh and you probably wont? they are both redhat23:51
SamYaplei dont like glusterfs23:51
SamYapleit doesnt scale past 1PB and performance is not amazing23:51
gunixanyway it's not part of openstack ansible so using it would mean too much stress23:52
gunixi think the easiest way to get high performance block storage is to just get netapp23:53
masberlogs from compute node --> http://paste.openstack.org/raw/626105/23:53
gunixprobably emc does a good job, but we have support from EMC for storage and backup and it's the most horrible support i ever got in my life23:53
SamYapledid you discover_hosts with nova-manage?23:53
*** threestrands has quit IRC23:53
SamYaplegunix: for the price of a netapp i can assure you i get better performance out of ceph23:54
SamYapleif you have more money than manpower, it might make sense23:54
gunix:D23:54
*** bobh has quit IRC23:54
gunixwhat do you think about scaleio?23:54
masberSamYaple, I didn't discover_hosts, shall I?23:54
gunixmasber: do you have anything to lose on that setup? :D23:55
*** jonaspaulo has quit IRC23:55
SamYaplemasber: nova-manage cell_v2 discover_hosts --verbose23:55
gunixmasber: did you try openstack-ansible ? :D23:55
SamYaplemasber: assuming you have done the rest of the cells_v2 correctly23:55
masbergunix, I have 0 instances. I just redeployed openstack because of this problem, but redeploying doesnt fix anything23:55
masberI am not worried about losing data, but I would like to understand why this is happening23:56
SamYaplemasber: you have to discover_hosts when adding new computes23:56
SamYapleyay cellsv223:56
*** TMM has joined #openstack23:56
gunixSamYaple: i saw those videos about scaleio providing 4 times better performance than ceph and i think they did something to the metrics to show off.. i don't think it provides 4x better performance23:57
masbergunix, I use kolla-ansible which has been working fine till now. I started having this issue after upgrading the nic drivers on all hosts23:57
gunixmasber: if you don't have anything important on the setup, you can just do trial & error until you find the issue. if you can connect this to the driver update, i suggest reverting to old driver just for a test23:58
SamYaplemasber: what does `nova server-list` say?23:58
*** Cybodog has joined #openstack23:58

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!