*** whoami-rajat has joined #openstack-manila | 01:19 | |
*** irclogbot_1 has quit IRC | 01:44 | |
*** lpetrut has joined #openstack-manila | 03:49 | |
*** lpetrut has quit IRC | 04:21 | |
*** pcaruana has joined #openstack-manila | 04:49 | |
*** pcaruana has quit IRC | 04:55 | |
*** e0ne has joined #openstack-manila | 05:31 | |
*** e0ne has quit IRC | 05:39 | |
*** e0ne has joined #openstack-manila | 05:56 | |
*** e0ne has quit IRC | 06:04 | |
*** lpetrut has joined #openstack-manila | 06:08 | |
*** pcaruana has joined #openstack-manila | 06:30 | |
*** kopecmartin|off is now known as kopecmartin | 06:50 | |
*** tosky has joined #openstack-manila | 07:19 | |
*** e0ne has joined #openstack-manila | 07:24 | |
*** e0ne has quit IRC | 07:52 | |
*** e0ne has joined #openstack-manila | 08:36 | |
openstackgerrit | Nir Gilboa proposed openstack/manila-tempest-plugin master: Scenario test: Create/extend share and write data https://review.openstack.org/531568 | 08:40 |
*** zigo_ has joined #openstack-manila | 08:46 | |
*** zigo_ is now known as zigo | 08:50 | |
*** tbarron_ has joined #openstack-manila | 09:13 | |
*** e0ne has quit IRC | 09:27 | |
*** e0ne has joined #openstack-manila | 09:34 | |
*** e0ne has quit IRC | 10:06 | |
*** e0ne has joined #openstack-manila | 10:13 | |
*** e0ne has quit IRC | 11:02 | |
*** e0ne has joined #openstack-manila | 11:09 | |
*** carloss has joined #openstack-manila | 11:17 | |
*** e0ne has quit IRC | 11:46 | |
*** pcaruana has quit IRC | 11:47 | |
*** e0ne has joined #openstack-manila | 11:51 | |
*** e0ne has quit IRC | 11:53 | |
*** pcaruana has joined #openstack-manila | 12:38 | |
*** mmethot has joined #openstack-manila | 12:45 | |
*** enriquetaso has joined #openstack-manila | 12:59 | |
*** dviroel has joined #openstack-manila | 13:26 | |
*** whoami-rajat has quit IRC | 13:28 | |
*** pcaruana has quit IRC | 13:31 | |
*** pcaruana has joined #openstack-manila | 13:36 | |
*** whoami-rajat has joined #openstack-manila | 13:38 | |
*** eharney has joined #openstack-manila | 13:52 | |
*** jmlowe has quit IRC | 13:52 | |
*** luizbag has quit IRC | 14:07 | |
*** lpetrut has quit IRC | 14:09 | |
*** kaisers_ has joined #openstack-manila | 14:10 | |
*** jmlowe has joined #openstack-manila | 14:28 | |
*** kaisers_ is now known as kaisers_away | 14:36 | |
*** kaisers_away is now known as kaisers_ | 14:54 | |
*** esker has joined #openstack-manila | 15:12 | |
*** e0ne has joined #openstack-manila | 15:14 | |
*** esker has quit IRC | 15:22 | |
*** kaisers_ is now known as kaisers_away | 15:57 | |
*** kaisers_away is now known as kaisers_ | 16:00 | |
*** kaisers_ is now known as kaisers_away | 16:00 | |
*** e0ne has quit IRC | 16:02 | |
gouthamr | Darkl0rd!!780135 | 16:03 |
* gouthamr mondays | 16:04 | |
vkmc | :o | 16:04 |
* vkmc tries gouthamr's bank account | 16:04 | |
*** e0ne has joined #openstack-manila | 16:05 | |
gouthamr | vkmc: :P i secure that a little better than IRC, try passw0rd_dumbledoreIsNotDeaD.com | 16:05 |
bswartz | gouthamr: spoiler alert! | 16:05 |
vkmc | gouthamr, nooooo | 16:05 |
gouthamr | bswartz vkmc: he's not, how could you believe it :( | 16:06 |
gouthamr | wait | 16:06 |
*** erlon has joined #openstack-manila | 16:16 | |
*** e0ne has quit IRC | 16:17 | |
*** kaisers_away is now known as kaisers_ | 16:36 | |
*** kaisers_ is now known as kaisers_away | 16:37 | |
*** e0ne has joined #openstack-manila | 16:46 | |
*** ociuhandu has quit IRC | 16:54 | |
*** luizbag has joined #openstack-manila | 17:08 | |
*** erlon has quit IRC | 17:13 | |
*** erlon has joined #openstack-manila | 17:15 | |
*** kopecmartin is now known as kopecmartin|off | 17:16 | |
*** e0ne has quit IRC | 17:16 | |
*** erlon has quit IRC | 17:31 | |
*** lseki has joined #openstack-manila | 17:37 | |
*** luizbag has quit IRC | 17:38 | |
*** kaisers_away is now known as kaisers_ | 18:06 | |
*** kaisers_ has quit IRC | 18:11 | |
*** patrickeast has quit IRC | 18:19 | |
*** dviroel has quit IRC | 18:19 | |
*** dviroel has joined #openstack-manila | 18:20 | |
*** carloss has quit IRC | 18:20 | |
*** patrickeast has joined #openstack-manila | 18:20 | |
*** amito has quit IRC | 18:20 | |
*** amito has joined #openstack-manila | 18:20 | |
*** carloss has joined #openstack-manila | 18:21 | |
*** erlon has joined #openstack-manila | 18:33 | |
*** e0ne has joined #openstack-manila | 18:33 | |
*** luizbag has joined #openstack-manila | 18:40 | |
*** thgcorrea has joined #openstack-manila | 18:40 | |
*** e0ne has quit IRC | 18:55 | |
*** jmlowe has quit IRC | 18:56 | |
*** luizbag has quit IRC | 19:59 | |
*** jmlowe has joined #openstack-manila | 20:00 | |
*** thgcorrea has quit IRC | 20:05 | |
*** whoami-rajat has quit IRC | 20:08 | |
*** pcaruana has quit IRC | 20:25 | |
*** eharney has quit IRC | 20:30 | |
openstackgerrit | Carlos Eduardo proposed openstack/manila master: DNM - Fix manila pagination speed https://review.openstack.org/650986 | 20:38 |
lseki | hi folks | 21:13 |
lseki | Could someone give me some advice on investigating this bug? https://bugs.launchpad.net/manila/+bug/1804208 | 21:13 |
openstack | Launchpad bug 1804208 in Manila "scheduler falsely reports share service down" [High,Triaged] - Assigned to Lucio Seki (lseki) | 21:13 |
lseki | it's a bug reported by carthaca. Seems that in a heavily loaded environment, the manila-share service is listed as `down` | 21:14 |
lseki | but in my tests it appears as `up` all the time, even when `_update_host_state_map` is reporting `Share service is down` | 21:17 |
lseki | it only appears as `down` when I restart the manila-share service and re-exporting the shares takes too long | 21:18 |
lseki | my guess is to perform liveness checks while re-exporting the shares under `ShareManager#ensure_driver_resources`, but I'm not sure if it's a good idea... | 21:23 |
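A minimal sketch of the up/down decision being discussed: the scheduler generally treats a share service as up only if its last recorded heartbeat (the `updated_at` timestamp on its row in the services table) is younger than a configured timeout. The names `SERVICE_DOWN_TIME` and `service_is_up` below are illustrative assumptions, not manila's exact code.

```python
# Hedged sketch of the heartbeat-age check behind log messages like
# "Share service is down"; option and function names are assumptions.
from datetime import datetime, timedelta
from typing import Optional

SERVICE_DOWN_TIME = timedelta(seconds=60)  # assumed timeout value

def service_is_up(last_heartbeat: datetime, now: Optional[datetime] = None) -> bool:
    """A service counts as up only while its last heartbeat is recent enough."""
    now = now or datetime.utcnow()
    return (now - last_heartbeat) <= SERVICE_DOWN_TIME

# A manila-share service that has not refreshed its record for two minutes,
# e.g. because it is busy re-exporting shares after a restart, shows as down.
print(service_is_up(datetime.utcnow() - timedelta(minutes=2)))  # False
```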
tbarron_ | lseki: i think carthaca intended to report a steady-state scale issue, not a restart issue | 21:31 |
tbarron_ | lseki: both would be legitimate issues, but it's less surprising that the service would show down in a restart :) | 21:31 |
tbarron_ | lseki: if there's only one instance of the service and it's down during a restart I'd think that's expected | 21:32 |
lseki | tbarron_ > there's only one instance of the service and it's down during a restart | 21:33 |
lseki | yeah that's exactly what's happening | 21:33 |
lseki | so it seems I haven't been able to reproduce the error yet | 21:33 |
tbarron_ | lseki: today the manila-share service runs active-passive and you'd need more than a devstack setup to run multiple instances under pacemaker, and even then it may be "down" for a brief period | 21:33 |
tbarron_ | lseki: right, carthaca is claiming that at SAP the service shows as down (from the vantage of the scheduler) w/o any manila-share failover, just due to scale issues | 21:34 |
tbarron_ | lseki: now it may be that they are running an older manila version and that we don't have the bug anymore but | 21:34 |
tbarron_ | lseki: i kinda doubt it, I don't think we've changed much in that area | 21:35 |
tbarron_ | lseki: so there's an interesting detective problem here, how to reproduce | 21:35 |
tbarron_ | lseki: some of our mutual downstream customers want to run about a hundred "edge" manila-share (and cinder-volume) services | 21:36 |
tbarron_ | lseki: from a core of three manila-api and scheduler services | 21:36 |
tbarron_ | lseki: so that's the scale issue, will the services show as down just b/c of this scale? even without any failover? | 21:37 |
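For context, a hedged sketch of the reporting side of the same mechanism: each manila-share service periodically refreshes its service record, and the scheduler applies an age check like the one sketched above. `REPORT_INTERVAL` and `update_service_record` are assumed names; the point is only that anything delaying this periodic update past the down-time threshold (heavy load, a slow database, or very many services reporting at once) makes a perfectly healthy service look down to the scheduler.

```python
# Hedged sketch of a periodic service heartbeat; names are illustrative,
# not manila's actual API.
import threading
import time

REPORT_INTERVAL = 10  # seconds between heartbeats (assumed value)

def update_service_record():
    # Placeholder for the DB write that bumps the service's updated_at.
    print("heartbeat at", time.strftime("%H:%M:%S"))

def heartbeat_loop(stop: threading.Event):
    while not stop.is_set():
        update_service_record()     # if this stalls for longer than the
        stop.wait(REPORT_INTERVAL)  # scheduler's down-time threshold,
                                    # the service is reported as down

stop = threading.Event()
threading.Thread(target=heartbeat_loop, args=(stop,), daemon=True).start()
time.sleep(1)
stop.set()
```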
tbarron_ | gouthamr: no rush, but the gating for csi features in next-gen okd is painful so | 21:38 |
tbarron_ | gouthamr: i've backed off for now to pure k8s | 21:38 |
tbarron_ | gouthamr: running one master and three nodes, i'm running a minimal | 21:39 |
lseki | tbarron_: hmm seems more complex than I thought | 21:39 |
tbarron_ | gouthamr: manila+keystone+rabbit+mysql on one of the nodes | 21:39 |
tbarron_ | lseki: maybe not, but i think it's a steady state scale issue that carthaca is targeting, not a failover issue | 21:39 |
gouthamr | tbarron_: ack, nice... local deployment? with kubeadm? | 21:40 |
tbarron_ | gouthamr: yup, https://paste.fedoraproject.org/paste/bk9yQyUXB36n~EendYxinA | 21:42 |
gouthamr | tbarron_: sweet! | 21:42 |
tbarron_ | gouthamr: running devstack on a worker; would you recommend running it on the master or on an independent node instead? | 21:42 |
tbarron_ | gouthamr: what's the better test/demo? | 21:43 |
tbarron_ | gouthamr: this is local on libvirt but we might be able to do it on openstack | 21:43 |
gouthamr | tbarron_: i used the dsvm as the master; the only issue being the etcd clash | 21:43 |
*** whoami-rajat has joined #openstack-manila | 21:45 | |
tbarron_ | gouthamr: maybe they fixed devstack so that etcd isn't always on, i picked the worker for that reason | 21:45 |
tbarron_ | gouthamr: but devstack didn't start it | 21:46 |
tbarron_ | gouthamr: I thought it used to start it no matter what | 21:46 |
tbarron_ | gouthamr: local.conf has: ENABLED_SERVICES=mysql,rabbit,keystone | 21:47 |
gouthamr | tbarron_: yes, you can "disable_service etcd3" i think | 21:47 |
tbarron_ | plus the plugin | 21:47 |
tbarron_ | for manila and ceph | 21:47 |
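A hedged sketch of the minimal local.conf being described here; only the ENABLED_SERVICES line and disabling etcd3 come from the discussion above, and the plugin repository URLs are assumptions added for illustration.

```
[[local|localrc]]
ENABLED_SERVICES=mysql,rabbit,keystone
disable_service etcd3

# plugin lines for manila and ceph; repo URLs are assumed for illustration
enable_plugin manila https://github.com/openstack/manila
enable_plugin devstack-plugin-ceph https://github.com/openstack/devstack-plugin-ceph
```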
tbarron_ | gouthamr: so given that we can run without it anyways, i'm wondering whether running on a worker or master is better to "prove" service decoupling | 21:48 |
tbarron_ | gouthamr: or maybe just another VM | 21:48 |
tbarron_ | gouthamr: theoretically I don't think it matters, maybe just dramatically :) | 21:48 |
gouthamr | tbarron_: true, as long as the AUTH_URL is reachable, it should work | 21:49 |
lseki | Alright, now I know that being "down" for a while after a restart is not a problem (at least not the reported issue), and I haven't managed to reproduce carthaca's issue yet. | 22:02 |
lseki | tbarron_: thanks, I'll keep trying to figure out how to reproduce the issue | 22:03 |
*** erlon has quit IRC | 22:05 | |
*** dviroel has quit IRC | 22:56 | |
tbarron_ | lseki++ | 23:35 |
*** whoami-rajat has quit IRC | 23:58 | |
*** tosky has quit IRC | 23:58 |