Tuesday, 2019-01-08

*** macza has quit IRC00:06
*** Liang__ has joined #openstack-meeting00:06
*** tetsuro has joined #openstack-meeting00:19
*** munimeha1 has quit IRC00:30
*** Swami has joined #openstack-meeting00:38
*** ircuser-1 has joined #openstack-meeting00:43
*** Swami has quit IRC00:59
*** yamamoto has joined #openstack-meeting01:02
*** bswartz has quit IRC01:03
*** bswartz has joined #openstack-meeting01:53
*** macza has joined #openstack-meeting01:55
*** jamesmcarthur has joined #openstack-meeting01:58
*** jamesmcarthur has quit IRC02:03
*** bobh_ has joined #openstack-meeting02:04
*** bobh_ has quit IRC02:08
*** tetsuro has quit IRC02:16
*** tetsuro has joined #openstack-meeting02:28
*** macza has quit IRC02:30
*** armax has quit IRC02:31
*** hongbin has joined #openstack-meeting02:37
*** psachin has joined #openstack-meeting02:41
*** mhen has quit IRC02:59
*** mhen has joined #openstack-meeting03:02
*** yamamoto has quit IRC03:05
*** apetrich has quit IRC03:16
*** whoami-rajat has joined #openstack-meeting03:16
*** yamamoto has joined #openstack-meeting03:22
*** yamamoto has quit IRC03:27
*** yamamoto has joined #openstack-meeting03:27
*** macza has joined #openstack-meeting03:32
*** macza has quit IRC03:36
*** tashiromt has joined #openstack-meeting03:52
*** hongbin has quit IRC03:57
*** imsurit has joined #openstack-meeting03:58
*** ShilpaSD has joined #openstack-meeting04:10
*** awaugama has quit IRC04:20
*** bzhao__ has quit IRC04:26
*** rfolco has quit IRC04:34
*** rfolco has joined #openstack-meeting04:34
*** yamamoto has quit IRC04:36
*** rf0lc0 has joined #openstack-meeting04:37
*** rfolco has quit IRC04:39
*** rf0lc0 has quit IRC04:40
*** rfolco has joined #openstack-meeting04:41
*** dims has quit IRC04:47
*** dims has joined #openstack-meeting04:48
*** sridharg has joined #openstack-meeting04:51
*** dims has quit IRC04:56
*** dims has joined #openstack-meeting04:56
*** bobh_ has joined #openstack-meeting05:19
*** bobh_ has quit IRC05:26
*** radeks has joined #openstack-meeting05:48
*** dkushwaha has joined #openstack-meeting05:56
*** imsurit has quit IRC06:07
*** radeks has quit IRC06:11
*** radeks has joined #openstack-meeting06:20
*** alex_xu has quit IRC06:23
*** gyee has quit IRC06:24
*** Liang__ has quit IRC06:24
*** Liang__ has joined #openstack-meeting06:30
*** imsurit has joined #openstack-meeting06:31
*** yamamoto has joined #openstack-meeting06:37
*** yamamoto has quit IRC06:42
*** Luzi has joined #openstack-meeting06:44
*** tetsuro has quit IRC06:51
*** bzhao__ has joined #openstack-meeting06:58
*** rcernin has quit IRC06:58
*** tashiromt has quit IRC07:08
*** imsurit has quit IRC07:31
*** imsurit has joined #openstack-meeting07:40
*** pcaruana has joined #openstack-meeting07:42
*** jbadiapa has joined #openstack-meeting07:43
*** dtrainor has quit IRC07:45
*** dtrainor has joined #openstack-meeting07:46
*** slaweq has joined #openstack-meeting07:50
*** slaweq has quit IRC07:53
*** slaweq has joined #openstack-meeting07:55
*** phuoc_ has joined #openstack-meeting07:59
*** kopecmartin|off is now known as kopecmartin08:00
dkushwaha#startmeeting tacker08:02
openstackMeeting started Tue Jan  8 08:02:08 2019 UTC and is due to finish in 60 minutes.  The chair is dkushwaha. Information about MeetBot at http://wiki.debian.org/MeetBot.08:02
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.08:02
*** openstack changes topic to " (Meeting topic: tacker)"08:02
openstackThe meeting name has been set to 'tacker'08:02
dkushwaha#topic Roll Call08:02
*** openstack changes topic to "Roll Call (Meeting topic: tacker)"08:02
dkushwahawho is here for Tacker weekly meeting?08:02
*** bobh_ has joined #openstack-meeting08:02
dkushwahahi phuoc_08:03
phuoc_dkushwaha, good afternoon :)08:04
dkushwahaGood afternoon :)08:04
dkushwahaphuoc_, how was your vacation?08:05
phuoc_I had some drinks, but I am looking forward to lunar new year :)08:06
*** jungleboyj has quit IRC08:06
*** hadai has joined #openstack-meeting08:06
*** jungleboyj has joined #openstack-meeting08:06
phuoc_I think we should wait or ping other guys08:07
*** bobh_ has quit IRC08:07
dkushwahayea, lets wait for few more minutes08:07
dkushwahaI hope all members are still on holidays :)08:11
phuoc_dkushwaha, I think so08:11
*** joxyuki has joined #openstack-meeting08:12
*** priteau has joined #openstack-meeting08:12
dkushwahaphuoc_, Do you have something for discussion?08:12
dkushwahajoxyuki, howdy08:13
phuoc_I have some problems with SFC and NSH, I am trying to fix them if possible08:13
phuoc_joxyuki, hi08:14
*** imsurit has quit IRC08:14
*** ssbarnea has joined #openstack-meeting08:14
dkushwahajoxyuki, I have seen https://review.openstack.org/#/c/609610/08:15
dkushwahaand it looks fine to me.08:15
*** ssbarnea|bkp2 has quit IRC08:16
phuoc_LGTM, but I think we should wait for https://review.openstack.org/#/c/62894008:16
dkushwahabut before moving ahead on the coding patch, we should freeze the spec https://review.openstack.org/#/c/561840/08:16
phuoc_and rebase that patch08:16
joxyukidkushwaha, thanks. But I have to update the spec https://review.openstack.org/#/c/561840/08:16
*** e0ne has joined #openstack-meeting08:16
*** aojea has joined #openstack-meeting08:16
dkushwahajoxyuki, yea, so could you please update the same08:17
joxyukiYes, I have to add some info according to the patch, https://review.openstack.org/#/c/622888/08:18
dkushwahajoxyuki, ok08:20
dkushwaha#topic VNF monitoring using Mistral08:20
*** openstack changes topic to "VNF monitoring using Mistral (Meeting topic: tacker)"08:20
dkushwaha#link https://review.openstack.org/#/c/486924/08:21
phuoc_I am waiting for the update to that patch08:21
dkushwahaI am still hunting through this patch for the hardcoded values, and not getting a suitable way to handle them08:21
dkushwahaI wish, I will update it in couple of days08:22
*** ralonsoh has joined #openstack-meeting08:23
dkushwahamoving ahead..08:24
dkushwaha#topic force-delete resources08:24
*** openstack changes topic to "force-delete resources (Meeting topic: tacker)"08:24
dkushwaha#link https://review.openstack.org/#/c/602528/08:24
dkushwahaphuoc_, joxyuki could you please review the spec?08:25
phuoc_yes, I will spend time to look at it08:25
joxyukiyes, me too08:25
dkushwahaphuoc_, Regarding your SFC & NSH patch08:28
phuoc_dkushwaha, yes08:28
dkushwahado you have some docs related to that? I need to look into it for a deeper understanding08:29
dkushwahathat will be very helpful08:30
phuoc_I don't have an official document, I am trying to look at the NSH guide in the ovs documents08:31
phuoc_when I test SFC, only mpls works08:31
phuoc_with NSH enabled, it is not working, I will send you some documents if I figure it out :)08:32
dkushwahaphuoc_, Thanks, that will be helpful for me08:33
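For context on the SFC/NSH discussion above: the Network Service Header (RFC 8300) carries a 24-bit Service Path Identifier (SPI) and an 8-bit Service Index (SI), which together identify the chain and the current hop. A minimal sketch of packing that Service Path header follows; it is illustrative only and is not Tacker, networking-sfc, or OVS code.

```python
import struct

def nsh_service_path_header(spi, si):
    """Pack the NSH Service Path header (RFC 8300): a 24-bit Service
    Path Identifier followed by an 8-bit Service Index, big-endian."""
    if not (0 <= spi < 2**24 and 0 <= si < 2**8):
        raise ValueError("SPI is 24 bits, SI is 8 bits")
    return struct.pack("!I", (spi << 8) | si)

# Path 1, initial index 255; the index is decremented at each hop.
print(nsh_service_path_header(1, 255).hex())  # 000001ff
```

Per RFC 8300, each service function decrements the SI, and a packet whose SI reaches zero is dropped, which is why classifier rewrites of these fields matter when debugging a chain.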
dkushwaha#topic Open Discussion08:33
*** openstack changes topic to "Open Discussion (Meeting topic: tacker)"08:33
dkushwahawe are in Stein-2, so let's try to finish the spec patches08:35
dkushwahaDo we have something more to discuss today ?08:36
phuoc_nothing from me08:36
dkushwahaok, Thanks phuoc_ joxyuki08:37
dkushwahaClosing this meeting for now08:37
phuoc_bye :)08:37
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"08:37
openstackMeeting ended Tue Jan  8 08:37:32 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)08:37
openstackMinutes:        http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-01-08-08.02.html08:37
*** joxyuki has left #openstack-meeting08:37
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-01-08-08.02.txt08:37
openstackLog:            http://eavesdrop.openstack.org/meetings/tacker/2019/tacker.2019-01-08-08.02.log.html08:37
*** evrardjp_ has joined #openstack-meeting08:48
*** evrardjp__ has joined #openstack-meeting08:49
*** tetsuro has joined #openstack-meeting08:51
*** hadai has quit IRC08:51
*** evrardjp has quit IRC08:51
*** evrardjp_ has quit IRC08:52
*** evrardjp__ has quit IRC08:53
*** imsurit has joined #openstack-meeting08:58
*** evrardjp has joined #openstack-meeting09:03
*** dkushwaha has left #openstack-meeting09:14
*** bobh_ has joined #openstack-meeting09:15
*** Liang__ has quit IRC09:20
*** apetrich has joined #openstack-meeting09:26
*** carlos_silva has joined #openstack-meeting09:59
*** erlon has joined #openstack-meeting10:05
*** priteau has quit IRC10:17
*** bobh_ has quit IRC10:19
*** tetsuro has quit IRC10:23
*** lpetrut has joined #openstack-meeting10:26
*** kopecmartin is now known as kopecmartin|afk10:35
*** vishalmanchanda has joined #openstack-meeting10:43
*** priteau has joined #openstack-meeting10:44
*** liuyulong has joined #openstack-meeting10:54
*** liuyulong has quit IRC10:55
*** liuyulong has joined #openstack-meeting10:56
*** yamamoto has joined #openstack-meeting11:02
*** yamamoto has quit IRC11:11
*** imsurit has quit IRC11:11
*** yamamoto has joined #openstack-meeting11:23
*** lpetrut has quit IRC11:26
*** yamamoto has quit IRC11:29
*** yamamoto has joined #openstack-meeting12:00
*** dangtrinhnt_x has joined #openstack-meeting12:05
*** dangtrinhnt_x has quit IRC12:40
*** raildo has joined #openstack-meeting12:51
*** yamamoto has quit IRC13:00
*** yamamoto has joined #openstack-meeting13:02
*** yamamoto has quit IRC13:04
*** dmacpher has quit IRC13:09
*** jrbalderrama has joined #openstack-meeting13:18
*** liuyulong_ has joined #openstack-meeting13:22
*** liuyulong has quit IRC13:28
*** zigo has joined #openstack-meeting13:30
*** psachin has quit IRC13:33
*** trident has quit IRC13:41
*** jamesmcarthur has joined #openstack-meeting13:42
*** trident has joined #openstack-meeting13:42
*** jhesketh has quit IRC13:45
*** jhesketh has joined #openstack-meeting13:47
*** trident has quit IRC14:03
*** trident has joined #openstack-meeting14:03
*** kopecmartin|afk is now known as kopecmartin14:06
*** bobh_ has joined #openstack-meeting14:16
*** bobh_ has quit IRC14:21
*** jamesmcarthur has quit IRC14:28
*** priteau has quit IRC14:33
*** mjturek has joined #openstack-meeting14:37
*** Luzi has quit IRC14:45
*** jamesmcarthur has joined #openstack-meeting14:48
*** jamesmcarthur has quit IRC14:50
*** jamesmcarthur has joined #openstack-meeting14:50
*** liuyulong_ has quit IRC14:51
*** vishalmanchanda has quit IRC14:53
*** awaugama has joined #openstack-meeting14:55
*** shintaro has joined #openstack-meeting15:01
*** r-mibu has joined #openstack-meeting15:06
*** markvoelker has joined #openstack-meeting15:23
*** markvoelker has quit IRC15:26
*** mlavalle has joined #openstack-meeting15:29
*** armax has joined #openstack-meeting15:41
*** dmacpher has joined #openstack-meeting15:49
*** shintaro has quit IRC15:55
*** e0ne has quit IRC15:56
*** bobh has joined #openstack-meeting15:59
slaweq#startmeeting neutron_ci16:00
openstackMeeting started Tue Jan  8 16:00:11 2019 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.16:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.16:00
*** openstack changes topic to " (Meeting topic: neutron_ci)"16:00
openstackThe meeting name has been set to 'neutron_ci'16:00
*** vishakha has joined #openstack-meeting16:01
*** hongbin has joined #openstack-meeting16:01
slaweqhi hongbin16:01
*** jrbalderrama has quit IRC16:01
slaweqmlavalle: ping :)16:01
slaweqhi mlavalle16:02
mlavallesorry, distracted looking at code16:02
slaweqI think we can start now16:02
mlavallethanks for pinging me16:02
slaweqhaleyb: bcafarel and njohnston will probably not be available16:02
slaweqmlavalle: no problem :)16:02
slaweq#topic Actions from previous meetings16:02
*** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"16:02
mlavallethey are not getting paid, right?16:02
slaweqwhy? :)16:03
mlavallethey didn't show up today16:03
mlavallejust kidding16:03
slaweqthey are on internal meetup this week16:03
slaweqso they are busy16:03
mlavalleis that all week long?16:03
slaweqtuesday to thursday16:04
mlavalleso I know not to bother them16:04
slaweqok, so lets go with actions from last year then :)16:04
slaweqmlavalle will continue debugging trunk tests failures in multinode dvr env16:04
*** dmacpher has quit IRC16:05
mlavalleI continued working on it. Yesterday I wrote an update in the bug: https://bugs.launchpad.net/neutron/+bug/1795870/comments/816:05
openstackLaunchpad bug 1795870 in neutron "Trunk scenario test test_trunk_subport_lifecycle fails from time to time" [High,In progress] - Assigned to Miguel Lavalle (minsel)16:05
*** dmacpher has joined #openstack-meeting16:05
slaweqyes, I saw this comment yesterday16:06
mlavalleBased on the evidence that I list in that note, it seems to me the dvr router is not being scheduled on one of the hosts16:06
mlavallein this case the controller16:06
mlavallethe router in question doesn't show up at all in the l3 agent log on that host16:06
slaweqso the vm which was on the controller wasn't reachable?16:06
*** bobh has quit IRC16:06
slaweqor the vm on the subnode wasn't reachable?16:06
mlavalleso I am looking at the router scheduling code16:07
mlavallethe one in the controller16:07
slaweqahh, ok16:07
slaweqso that we can rule out network connectivity between nodes as the reason16:08
mlavalleyes, we can rule that out16:08
mlavalleI know that because other tests run16:08
*** baojg has joined #openstack-meeting16:09
slaweqyes, but it could be that e.g. tests where the vm was spawned on the subnode were failing and tests where the vm was spawned on the controller were passing - then You couldn't rule that out16:09
slaweqbut in such case if vm on controller didn't work, it isn't underlay network issue16:10
mlavalleno, because there are tests where 2 vms, one on the controller and one on the compute, are passing16:10
*** baojg has quit IRC16:10
slaweqyes, so we can definitely rule out issues with underlay network :)16:10
mlavallehang on16:10
*** baojg has joined #openstack-meeting16:11
mlavalleyou wrote this test: https://review.openstack.org/#/c/598676/16:11
mlavalleand the vms run in both hosts, right?16:11
mlavallethis test passes16:12
slaweqyes, they should be on separate nodes always16:12
mlavallethat test led me to https://review.openstack.org/#/c/59756716:12
mlavallewhich merged in November16:12
slaweqyes, it was big patch16:13
mlavalleand where we made significant changes to dvr scheduling16:13
mlavalleso I am looking at that code now16:13
mlavallethat is why I was late to this meeting ;-)16:13
slaweqok, so You are justified now :P16:13
mlavallethat's where I am right now16:13
mlavalleI'll continue pushing forward16:13
slaweqok, thx a lot16:14
mlavalleI might push a DNM patch to do some testing16:14
slaweqsure, let me know if You will need any help16:14
slaweq#action mlavalle will continue debugging trunk tests failures in multinode dvr env16:14
slaweqthx mlavalle for working on this16:14
*** erlon has quit IRC16:15
slaweqok, lets move on to the next one16:15
slaweqhaleyb to report bugs about recent errors in L3 agent logs16:15
mlavallecan I make one additional comment16:15
slaweqgo on16:15
mlavalleseveral of the other failures in the dvr multinode job exhibit similar behavior....16:16
mlavallethe VM cannot reach the metadata service16:16
mlavalleso this might be the common cause of several of the failures trying to ssh to instances16:16
slaweqyes, I also think that, I saw some other tests with similar issues, it's not always the same test16:17
*** anteaya has joined #openstack-meeting16:17
mlavalleso this is related to the email you sent me in december16:17
mlavallewith ssh failures16:18
*** diablo_rojo has joined #openstack-meeting16:18
* mlavalle trying to show diligence with homework assigned by El Comandante16:18
slaweqmlavalle: but it's not always dvr jobs with such failures16:18
slaweqe.g. here: * haleyb to report bugs about recent errors in L3 agent logs16:18
slaweqit is the same issue16:19
slaweqand it's single node tempest job16:19
slaweqwithout dvr16:19
mlavalleok, I'll look at that then16:19
mlavalleit is useful data16:19
*** pcaruana has quit IRC16:20
slaweqI saw in this job some errors in L3 agent logs: http://logs.openstack.org/09/626109/10/check/tempest-full/fab9ab0/controller/logs/screen-q-l3.txt.gz?level=ERROR16:20
slaweqbut I'm not sure if that is related to the issue or not16:20
slaweqok, can we move on then?16:21
mlavalleyes, I'll look at that16:22
slaweqthx mlavalle16:22
mlavallethanks for the pointers16:22
*** baojg has quit IRC16:22
slaweqYou're welcome :)16:22
slaweqok, lets move on16:22
*** baojg has joined #openstack-meeting16:22
slaweqhaleyb to report bugs about recent errors in L3 agent logs16:22
slaweqhaleyb is not here16:22
slaweqbut he opened bug https://bugs.launchpad.net/neutron/+bug/180913416:23
openstackLaunchpad bug 1809134 in neutron "TypeError in QoS gateway_ip code in l3-agent logs" [High,In progress] - Assigned to Brian Haley (brian-haley)16:23
slaweqand he is working on it so we should be fine16:23
slaweqI think we can move on to the next one then16:24
slaweqslaweq to talk with bcafarel about SIGHUP issue in functional py3 tests16:24
slaweqI talked with bcafarel, he checked that and he found that this issue is now fixed with https://review.openstack.org/#/c/624006/16:24
*** e0ne has joined #openstack-meeting16:25
slaweqany questions or do You want to talk about something else related to last week's actions?16:25
slaweqor can we move on, as those were all the actions from the last meeting16:26
slaweqok, lets move on then16:27
mlavallelet's move on16:27
slaweq#topic Python 316:27
*** openstack changes topic to "Python 3 (Meeting topic: neutron_ci)"16:27
slaweqEtherpad: https://etherpad.openstack.org/p/neutron_ci_python316:27
slaweqin this etherpad You can track progress of switching our CI to python316:28
slaweqrecently I did some patches related to it:16:28
slaweqso please review if You have some time16:28
slaweqI’m also working slowly on the conversion of the neutron-functional job to py3 in https://review.openstack.org/#/c/577383/ but it isn't ready yet I think16:29
mlavalleso I can take an assignment from there16:29
slaweqin the patches I did, I was doing the conversion to zuulv3 and python3 together16:30
mlavallethat's good idea16:30
slaweqbut if You want to only switch jobs to py3 for now, it's fine too for me16:30
mlavalleand good practice16:30
slaweqfor some grenade jobs I think njohnston was doing some patches16:31
slaweqbut I didn't check those much recently so I am not sure what the current state of those jobs is16:31
slaweqbut I think that we have good progress on it and we can switch everything in this cycle I hope :)16:32
slaweqbtw. I'm not sure if You are aware, but there is also an almost-merged patch https://review.openstack.org/#/c/622415/16:33
slaweqto switch devstack to be py3 by default16:33
slaweqI'm not sure, but then I think all devstack based jobs should be running py3 by default16:34
mlavallelet's keep an eye on that16:34
slaweqok, can we move on to the next topic?16:35
slaweqI take it as yes16:36
*** erlon has joined #openstack-meeting16:36
slaweq#topic Grafana16:36
*** erlon has quit IRC16:36
*** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"16:36
slaweqcouple notes on grafana16:37
slaweqall tempest and neutron-tempest-plugin jobs are on quite high failure rates recently16:37
slaweqand in most cases where I checked it, it was this issue where the vm couldn't reach metadata or the issue where SSH to the instance wasn't possible16:38
slaweqI know that mlavalle will continue work on first problem16:38
slaweqso I can try to debug second one16:38
*** gyee has joined #openstack-meeting16:38
slaweq#action slaweq to debug problems with ssh to vm in tempest tests16:38
mlavalleyes, I think it is the right approach16:38
slaweqif I find anything I will open a bug/bugs for it16:39
slaweqthere is also one more issue worth mentioning: recently we had functional tests failing 100% of the time, which was caused by https://bugs.launchpad.net/neutron/+bug/181051816:39
openstackLaunchpad bug 1810518 in oslo.privsep "neutron-functional tests failing with oslo.privsep 1.31" [Critical,Confirmed] - Assigned to Ben Nemec (bnemec)16:39
slaweqfor now it is workaround by lowering oslo.privsep version in requirements repo but we need to fix this issue somehow16:40
*** kopecmartin is now known as kopecmartin|off16:40
slaweqI talked with rubasov today and he has no idea how to fix this :/16:41
slaweqI also spent few hours on it today16:41
mlavallewell was worth trying16:41
slaweqI described my findings in comment on launchpad but I have no idea how to deal with it16:41
slaweqI have no experience with the ctypes lib and calling C functions from python16:42
slaweqand together with threading16:42
slaweqI will ask bnemec today if maybe he has found something16:43
slaweqand we will see what to do with it next16:43
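For context on the oslo.privsep problem being discussed: the failing path crosses from threaded Python into C via the ctypes module. Below is a minimal, self-contained illustration of the basic ctypes pattern involved (plain libc on Linux; this is not the actual privsep code).

```python
import ctypes

# On Linux, dlopen(NULL) exposes the symbols of the already-loaded C
# library, so no library path lookup is needed (platform assumption).
libc = ctypes.CDLL(None, use_errno=True)

# Declare argtypes/restype explicitly: with the defaults ctypes guesses
# the types, and a wrong guess is exactly the kind of bug that may only
# surface intermittently once threads are involved.
libc.strlen.argtypes = [ctypes.c_char_p]
libc.strlen.restype = ctypes.c_size_t

print(libc.strlen(b"neutron"))  # 7
```

The combination of foreign calls like this with Python threading is what makes the privsep regression hard to reason about from pure-Python stack traces alone.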
slaweqanything else You want to add/ask here?16:44
mlavalleif you get stuck, let me know16:44
slaweqmlavalle: sure, thx16:44
mlavalleif nothing else, it is a good learning opportunity16:44
slaweqyes, it is16:44
mlavallectypes, C, etc16:44
slaweqI read a lot about it today :)16:44
slaweqok, lets move on16:45
slaweq#topic fullstack/functional16:45
*** openstack changes topic to "fullstack/functional (Meeting topic: neutron_ci)"16:45
slaweqregarding fullstack tests16:45
slaweqwe still have opened issue https://bugs.launchpad.net/neutron/+bug/179847516:45
openstackLaunchpad bug 1798475 in neutron "Fullstack test test_ha_router_restart_agents_no_packet_lost failing" [High,In progress] - Assigned to LIU Yulong (dragon889)16:45
slaweqliuyulong told me today that he hasn't spotted it much recently. We will continue work on it16:46
slaweqabout functional tests16:46
slaweqI have a small update on https://bugs.launchpad.net/neutron/+bug/168702716:46
openstackLaunchpad bug 1687027 in neutron "test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)16:46
slaweqI was checking those failed db migrations16:47
slaweqand it looks like this isn't a specific problem in the Neutron migration scripts16:47
slaweqit's probably just an issue with IO performance on some cloud providers16:47
mlavalleyeah, that makes it difficult to fix16:48
slaweqI described my findings in a comment in https://bugs.launchpad.net/neutron/+bug/1687027/comments/4116:48
openstackLaunchpad bug 1687027 in neutron "test_walk_versions tests fail with "IndexError: tuple index out of range" after timeout" [High,Confirmed] - Assigned to Slawek Kaplonski (slaweq)16:48
slaweqbasically in such a timed-out run, every step in the migration takes a very long time16:48
slaweqand dstat shows me that it was below 100 IOPS during such a test16:49
slaweqwhile during a "good" test run there were more than 2000 IOPS when I was checking it16:49
mlavalleso we are hitting a limit16:49
slaweqI think that sometimes we are just on some slow nodes and that is the reason16:50
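For reference, the IOPS comparison described above (as reported by dstat) can be reproduced from two /proc/diskstats samples. A hypothetical sketch, assuming the field layout documented in the kernel's iostats documentation:

```python
def completed_ops(diskstats_line):
    """Sum of completed reads and writes from one /proc/diskstats line.

    Field 4 (index 3) is reads completed; field 8 (index 7) is writes
    completed, per the kernel iostats documentation.
    """
    fields = diskstats_line.split()
    return int(fields[3]) + int(fields[7])

def iops(sample_a, sample_b, interval_s):
    """Average completed operations per second between two samples."""
    return (completed_ops(sample_b) - completed_ops(sample_a)) / interval_s

# Two fabricated samples taken one second apart: roughly 150 IOPS.
before = "8 0 sda 1000 0 0 0 500 0 0 0 0 0 0"
after = "8 0 sda 1100 0 0 0 550 0 0 0 0 0 0"
print(iops(before, after, 1.0))  # 150.0
```

Sampling like this once per second and taking the delta is roughly what dstat's I/O request column shows, so a node stuck well under 100 here during a migration run matches the "slow node" diagnosis.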
slaweqI talked with infra about it16:50
slaweqthey told me that they had some issues with one provider recently but this should be fixed now16:50
*** imacdonn has quit IRC16:50
slaweqI want to send a patch which will unmark those tests as unstable16:51
mlavalleahh, so there's hope16:51
*** imacdonn has joined #openstack-meeting16:51
slaweqand try to recheck it for some time to see if that will happen again16:51
slaweqwhen I was looking at logstash a few days ago there weren't many such issues16:51
slaweqbut we will probably have them from time to time still16:51
slaweqand that's all for functional/fullstack tests from me for today16:52
slaweqdo You want to add something?16:52
*** artom has quit IRC16:52
slaweqok, so I guess that we can go to the next topic now :)16:53
slaweq#topic Tempest/Scenario16:53
*** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"16:53
slaweqregarding tempest/scenario tests, we already discussed our 2 main issues there16:53
slaweqso I just wanted to ask about one thing16:53
slaweqI recently sent the patch https://review.openstack.org/#/c/627970/ which adds a non-voting tempest job based on Fedora16:54
slaweqI wanted to ask for reviews, and for opinions from the community on whether it's fine for You to have such a job16:54
slaweqit's running on py3 of course :)16:55
mlavalleI'm ok with it16:55
slaweqgreat mlavalle, thx :)16:55
mlavallehave we seen opposition in other projects?16:55
slaweqI don't know about any16:55
mlavalleI'm fine with it16:56
slaweqthx mlavalle16:56
slaweqplease add this patch to Your review list :)16:56
mlavalleI just did :-)16:57
slaweqthx :)16:57
slaweqok, so that's all from my side for today16:57
slaweqdo You want to talk about anything else quickly?16:57
slaweqok, so thx for attending mlavalle and hongbin16:58
*** artom has joined #openstack-meeting16:58
mlavallethanks for the great facilitation, as always16:58
slaweqsee You next week16:58
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"16:58
openstackMeeting ended Tue Jan  8 16:58:26 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)16:58
openstackMinutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-01-08-16.00.html16:58
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-01-08-16.00.txt16:58
openstackLog:            http://eavesdrop.openstack.org/meetings/neutron_ci/2019/neutron_ci.2019-01-08-16.00.log.html16:58
*** macza has joined #openstack-meeting16:59
*** e0ne has quit IRC17:08
*** jamesmcarthur has quit IRC17:15
*** jamesmcarthur has joined #openstack-meeting17:19
*** jamesmcarthur has quit IRC17:32
*** jamesmcarthur has joined #openstack-meeting17:45
*** jamesmcarthur has quit IRC17:56
*** jamesmcarthur has joined #openstack-meeting18:02
*** ralonsoh has quit IRC18:07
*** whoami-rajat has quit IRC18:12
*** rf0lc0 has joined #openstack-meeting18:14
*** rfolco has quit IRC18:15
*** rf0lc0 is now known as rfolco18:16
*** hongbin has quit IRC18:18
*** Swami has joined #openstack-meeting18:22
*** e0ne has joined #openstack-meeting18:23
*** armax has quit IRC18:26
*** lbragstad has quit IRC18:52
*** lbragsta_ has joined #openstack-meeting18:52
*** lbragsta_ is now known as lbragstad18:55
*** sridharg has quit IRC18:59
clarkbAnyone here for the infra meeting?18:59
clarkbwe'll get started in a couple minutes18:59
clarkb#startmeeting infra19:01
openstackMeeting started Tue Jan  8 19:01:09 2019 UTC and is due to finish in 60 minutes.  The chair is clarkb. Information about MeetBot at http://wiki.debian.org/MeetBot.19:01
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.19:01
*** openstack changes topic to " (Meeting topic: infra)"19:01
openstackThe meeting name has been set to 'infra'19:01
clarkb#link https://wiki.openstack.org/wiki/Meetings/InfraTeamMeeting#Agenda_for_next_meeting19:01
*** Shrews has joined #openstack-meeting19:01
*** armax has joined #openstack-meeting19:01
clarkb#topic Announcements19:02
*** openstack changes topic to "Announcements (Meeting topic: infra)"19:02
clarkbI don't actually have any announcements19:02
clarkbwelcome back everyone!19:02
clarkb#topic Actions from last meeting19:03
*** openstack changes topic to "Actions from last meeting (Meeting topic: infra)"19:03
clarkb#link http://eavesdrop.openstack.org/meetings/infra/2018/infra.2018-12-18-19.01.txt minutes from last meeting19:03
clarkb#link https://review.openstack.org/#/q/status:open+topic:fedora29 Fedora29 support in our toolchain19:03
clarkbthere are still a few outstanding reviews for getting fedora 29 up and running if you have time to go and review those changes and their dependencies19:03
clarkbLooking at other recent lists of actions we got the storyboard mysql changes in and next step is for fungi to boot new servers for storyboard19:04
clarkbfungi: ^ did that happen when I was not paying attention to my computer?19:04
fungii got sidelined working through a prerequisite19:05
fungiso that we can include rebuilds on xenial19:05
clarkbok, let us know if we can help19:05
fungi#link https://etherpad.openstack.org/p/gCj4NfcnbW tentative database move plan19:05
fungii think i'm going to just do the server rebuilds as a separate first step19:06
fungiand wanted to build the instances as, e.g., storyboard-dev01.opendev.org19:06
fungi(keeping the vhost storyboard-dev.openstack.org)19:06
fungibut yeah, that was the series of glob/regex updates to allow us to have opendev.org instance names19:07
fungiwhich i think we're through now, or at least i'm not aware of any other lingering bugs/fallout from19:07
fungiso should be able to pick that back up this week19:07
clarkbAlso now that I think about it I do have an announcement. At 2000UTC (after this meeting) corvus and mordred are doing the vexxhost + kubernetes + ceph + percona + gitea brain dump thing. I believe the plan is to record it for those that can't make it19:07
*** r-mibu has quit IRC19:08
Shrewsi hope alcohol is being served then, too19:08
anteayawhere is that taking place?19:08
clarkbfungi: and ya I think those bits should be sorted now for opendev stuff. The next big hurdle is tls/ssl for actually using opendev dns records (but we have time for that later in the agenda)19:09
clarkbanteaya: on pbx.openstack.org room 656119:09
clarkb#topic Specs approval19:09
*** openstack changes topic to "Specs approval (Meeting topic: infra)"19:09
clarkbI did end up approving the opendev git hosting and storyboard attachments specs after my flight last month19:09
clarkb#link https://review.openstack.org/581214 Anomaly Detection in CI Logs19:10
clarkbThis spec is still active and after review I do think it is close. If anyone else is able to take a look at it soon that would be great, but I'd like to check with tristanC and dirk about putting it up for approval soon19:10
clarkbcorvus: it has a small intersection with the way we host logs, but tristanC says we can do everything client side if we want to (so not a major concern but you may want to take a quick look given your work around logs and artifacts and swift)19:11
clarkb#topic Priority Efforts19:12
*** openstack changes topic to "Priority Efforts (Meeting topic: infra)"19:12
clarkb#topic Update Config Management19:12
*** openstack changes topic to "Update Config Management (Meeting topic: infra)"19:12
clarkbjust prior to and after the holidays I made a push with cmurphy to get a large stack of our futureparser changes in19:13
cmurphyit looks like pretty much all of them are in?19:13
cmurphyexcept ask.o.o and openstackid19:13
clarkbcmurphy: I think we are currently up against the fact that smarcet and fungi are doing work on openstackid hosts nowish so adding futureparser to the mix would complicate things19:13
clarkbcmurphy: I think we should rebase to put openstackid at the very end of the stack and if there are any remaining hosts not already in the list we put them before openstackid19:13
fungiyeah, smarcet is in the middle of trying to upfit openstackid source code to newer laravel and php, so has openstackid.org in the disable list19:14
cmurphyclarkb: I already did that so those should be the last in the stack19:14
clarkbcmurphy: I think afs nodes and kerberos nodes and maybe one or two others may still need to be added?19:15
cmurphyi didn't propose anything for the afs servers or kerberos servers, i wasn't sure if they were already mostly ansibled?19:15
cmurphythe only other one is ask19:15
clarkb(i'd have to cross check against the puppet group; I can ask ansible for a full listing of it and diff that against the futureparser group)19:15
clarkbah ok19:15
clarkbI don't think afs or kerberos are mostly ansibled yet, but ianw was working on it?19:15
cmurphyI can add them19:16
*** vishakha has quit IRC19:16
ianwthe client side is, not the server side19:16
clarkbcmurphy: I think we should consider anything in the puppet group as valid for adding to the futureparser group19:16
cmurphyfor the actual puppet upgrade I'd rather not do it the same way as we've been doing the future parser changes because it has been taking a long time, i think it's safe to do it in bigger batches19:17
clarkbcmurphy: ya futureparser should catch most of our issues?19:17
clarkbin any case, much progress so thank you for that19:17
fungiagreed, a big bang is likely fine for that phase19:17
clarkbon the ansible + docker side of things I think we have all of the groundwork for allowing people to start sorting out actual docker hosted services19:18
clarkbI want to say that image management (of some sort) is the next big hurdle there. Likely to be a service by service thing ?19:19
mordredI'm not sure that'll be super hard- kind of similar to publishing things to pypi and stuff19:19
mordredbut yeah - until we've done it done it - we haven't done it19:19
corvusi have changes in flight to start on building images19:20
mordreddefinitely will go service to service as to whether we're consuming someone else's image or building our own19:20
clarkbcorvus: for gitea?19:20
clarkbianw ^ you may be interested in that for graphite19:20
corvus#link gitea dockerfile https://review.openstack.org/62638719:20
corvusincludes playbooks, roles, jobs for building and uploading19:21
clarkbcorvus: I wonder if we can semi standardize the building parts into zuul-jobs?19:21
clarkbthat is probably a followup task once we have something we are happy with though19:21
corvusthere are some particular things i'm doing with tags which may or may not be fully generalizable19:22
corvusi agree it's worth considering, and will be easy to move the roles if it works out19:22
mordredand easier for us to ponder generalizing once we've got a couple of examples too19:23
clarkbok so I think the takeaway here is we've removed known roadblocks and are ready for people to start building images and deploying services this way. Please let everyone know if you discover new roadblocks or things we need to consider more broadly. Otherwise I'm excited to see these efforts come together as deployed services19:24
mordreddefinitely check out the dockerfile though - it's a nice multi-stage build that winds up with an optimized final image19:24
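The multi-stage pattern mordred describes looks roughly like this (base images, app name, and paths are illustrative only, not the actual gitea build):

```dockerfile
# Stage 1: build in a full toolchain image
FROM golang:1.11 AS builder
WORKDIR /src
COPY . .
RUN go build -o /out/app ./cmd/app

# Stage 2: copy only the built binary into a small runtime image,
# so the toolchain never ships in the final image
FROM debian:stretch-slim
COPY --from=builder /out/app /usr/local/bin/app
ENTRYPOINT ["/usr/local/bin/app"]
```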
corvusthe actual job i added in that change passes, it's just had a run of bad luck with unrelated jobs19:24
corvusi rechecked it again; we should have been able to land it last year.  :)19:24
corvusalso this one is good for folks to know about:19:25
corvus#link jinja-init dockerfile https://review.openstack.org/62662619:25
fungiwith careful manipulation of timestamps maybe we still can ;)19:25
corvusthat's a pattern we may find useful19:25
clarkbThe last subtopic in my notes for config mgmt updates is zuul driven deployment. Do we want to try picking that up again or do we want to keep focusing on puppet4 and docker related efforts for now?19:26
ShrewsIs there process documentation?19:26
corvusyou'll need to see https://review.openstack.org/626759 to see it in use19:26
corvusShrews: can you elaborate on the question?19:27
ShrewsProbably getting ahead of the next meeting, but docs on building & deploying new services19:28
clarkbOn the topic of docs I think we are still in figure-it-out mode for a lot of this, but once we get services running in this manner we should update the system-config docs for that service with details. Then if we end up with common practice across services, have a centralized "this is what a container-deployed service looks like" doc19:28
corvusclarkb: ++19:28
clarkbsimilar to how we have links to the puppet manifests we can link to the dockerfiles and the most recently built image and so on19:29
corvus#link jinja-init docs: https://github.com/ObjectifLibre/jinja-init19:29
corvuson that subject...19:29
clarkbin theory one of the things this should enable is simpler local testing/interaction19:30
clarkbwhich hopefully makes the docs easier to consume too (because you can run the thing side by side with the docs)19:30
corvusi've been thinking that for the plain docker deployments (ie, not k8s), we should look at using docker-compose19:30
fungiyeah, one i noticed could use some updating already is the dns document in system-config19:30
mordredcorvus: ++19:30
fungisince it refers to adding things in puppet19:30
clarkbcorvus: that is how I run all my containers at home so would not be opposed. We should double check if that changes our networking assumptions though19:31
corvuseven a simple single-app-in-a-container deployment has some ancillary things (like volumes) that are really well expressed via docker-compose.  and the zuul quickstart has some ansible to run docker-compose.19:31
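A single-app deployment of the shape corvus describes fits naturally into a compose file: image, ports, and the ancillary volume all in one place (service and image names below are hypothetical, not an actual opendev service):

```yaml
version: '3'
services:
  gitea:
    image: opendevorg/gitea:latest   # hypothetical image name
    ports:
      - "3000:3000"
    volumes:
      - gitea-data:/data             # persistent state survives container rebuilds
    restart: always
volumes:
  gitea-data:
```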
clarkbfungi: ++19:31
mordredit has been really pleasing in its zuul quickstart context19:31
corvusfungi, clarkb: my bad on dns; i'll update docs19:31
corvus#action corvus update dns system-config docs19:31
fungicorvus: i'd have updated them but wasn't quite sure what i might be missing19:32
clarkbanything else before we move on? we have a few more agenda items I'd like to get to19:32
clarkb#topic Storyboard19:34
*** openstack changes topic to "Storyboard (Meeting topic: infra)"19:34
clarkbfungi: diablo_rojo really quickly wanted to double check that with the attachments spec merged and with mysql changes in place we've largely gotten the ball moving on infra related storyboard tasks?19:34
clarkbanything else we should have on our radar?19:34
diablo_rojoNothing atm, just getting started/organized for this new year19:35
fungii don't think so19:35
diablo_rojoI think there are some open reviews that could use some help19:35
diablo_rojoBut nothing other than that19:35
clarkbgood to know, thanks19:35
clarkb#topic OpenDev19:35
*** openstack changes topic to "OpenDev (Meeting topic: infra)"19:35
clarkbAt this point I think we are completely moved over to OpenDev DNS servers for our dns hosting and we can clean up the old openstack.org nameservers.19:36
clarkbfungi managed to update our ansible and puppet to accept openstack.org or opendev.org hostnames as appropriate19:36
clarkbThe website has content hosted on it now too (though it's super simple and if people want to make it look nicer they are more than welcome)19:37
fungiyep, lmk if you notice any other problems which seem to result from that19:37
fungioh, though i think the one i ran into over the holidays turned out not to have been a result of that change19:37
funginow i don't remember what it was19:37
fungiso many eggnogs between then and now19:38
clarkbI think this means that any new servers we boot should be booted in the opendev.org domain19:38
mordredzomg. so craycray19:38
fungiwell, it's worth a try anyway19:38
clarkbwe should already specify vhost names for any of our services that have digits in their host names19:38
clarkbso this should be safe for those hosts19:38
fungias i said, i'll try launching storyboard-dev01.opendev.org rsn19:38
clarkbthat will be a good test19:38
fungii haven't exercised the latest iteration of launch-node so fingers crossed19:39
*** e0ne has quit IRC19:39
clarkbThe next step after that is actually hosting things at foo.opendev.org. For many services this requires ssl certs for foo.opendev.org and possibly altnames for foo.openstack.org (so that we support existing users of $services with redirects)19:39
clarkbLets go ahead and skip ahead in the agenda to talking about letsencrypt now considering ^19:40
fungiwe have a few we could do sooner... paste/lodgeit comes to mind19:40
clarkbfungi: ianw: do we still think that is our best path forward? and if so is it reasonable to get up and running in the near future to support some service updates (like for paste and etherpad)19:41
*** jamesmcarthur has quit IRC19:41
clarkbor should we consider continuing to pay for certs in the near term?19:41
ianwi think it's worth a proof of concept19:42
clarkb#link https://review.openstack.org/#/c/587283/ letsencrypt spec19:42
*** jamesmcarthur has joined #openstack-meeting19:42
fungias i've not had time to write up the alternative le suggestion involving separate accounts per service instead of the central broker/proxy solution, i'm not going to push back against that design even though it does seem like it would have a lot more complexity19:42
*** e0ne has joined #openstack-meeting19:43
ianwdo we know of anyone else who has deployed LE in a similar fashion?19:43
clarkbok maybe we can get a rebase on the spec and attempt to move forward with it? I expect this will become an important piece of the near term opendev puzzle (granted we can buy new certs too)19:44
ianwi mean in a similar project environment, i mean19:44
clarkbianw: cloudnull was using it for osic and may have had similar setup for all the various openstack services?19:44
clarkb(I don't think we need to have the answers here today, just want to point it out as a piece that has priority trending in the upward direction)19:45
ianwwell i've laid out in the spec the issues as I see them, so I'm very open to discussion on the implementation19:46
clarkblet me know if I can help sort out details too19:46
clarkbianw: yup, was mostly concerned that I think fungi had reservations. Maybe we can move that discussion to the spec or to irc later today?19:46
clarkbour hour is running out and we have a couple more items to get to so I'll move on19:47
ianwi can have a re-read and rebase today anyway, it has been a while so maybe new thoughts will occur on context switching back in19:47
clarkb#topic General Topics19:47
*** openstack changes topic to "General Topics (Meeting topic: infra)"19:47
clarkbI wanted to check in on the github admin account changes. I think fungi fully removed himself as admin on our orgs and things are working with the admin account19:48
clarkbDo we want to forcefully remove everyone at this point and point at the admin account? or do we want everyone to remove themselves or?19:48
clarkbfungi: ^ you probably have thoughts since you went through the process for yourself19:48
corvusi don't object to being removed19:49
fungiyeah, i got our shared account added to all the other gh orgs we maintain19:49
fungi(before i removed mine) so they're all bootstrapped now19:49
fungiat least all the orgs i'm aware of. if i was an admin for it, i added our shared admin account before removing myself. let's put it that way19:50
mordredawesome. I also don't mind being removed - or I can remove myself19:50
clarkbmy only other concern was account recovery. Do we need to encrypt and stash the recovery codes?19:50
clarkbI believe that github is fairly unforgiving about account recovery if you lose those?19:50
fungiianw's instructions in the password list were nice and clear on how to use totp on bridge to generate the second factor otp19:50
fungiand includes the recovery key too, iirc19:51
corvusyep, i've completed a test login on the new account with no issues19:51
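The TOTP second factor fungi mentions generating on bridge is just RFC 6238: HMAC-SHA1 over a 30-second counter with dynamic truncation. A minimal sketch (not the actual tooling on bridge), verifiable against the RFC's published test vectors:

```python
import hashlib
import hmac
import struct
import time

def totp(secret, for_time=None, digits=6, step=30):
    """RFC 6238 TOTP: HMAC-SHA1 over the time-step counter, truncated."""
    if for_time is None:
        for_time = int(time.time())
    counter = struct.pack(">Q", int(for_time) // step)
    digest = hmac.new(secret, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# RFC 6238 test vector: ASCII secret "12345678901234567890" at T=59s
print(totp(b"12345678901234567890", 59, digits=8))  # -> 94287082
```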
mordredclarkb, corvus: I'm removing myself now - want me to remove you too?19:51
clarkbmordred: I think I'll remove myself so I'm familiar with the process19:51
corvusmordred: are you using the new account to do so, or your own?19:51
*** jamesmcarthur has quit IRC19:52
mordredcorvus: I was just going to use my own - but I could use the new account if that's better19:52
ianwi removed myself under my own account the other day19:52
ianwit seemed to work19:52
corvusmordred: for you and i, i think it would be better to use the new account, as a final sanity check that we've got everything :)19:52
mordredcorvus: that's a good point19:52
mordredin that case, I'm not going to do that just right now :)19:52
fungii used my account to remove my account (but only after i confirmed the shared account was working on those orgs of course)19:52
clarkbAnd really quickly before our hour is up I've been asked if the infra team would like to attend the PTG. The PTG runs after the summit May 2-4 in Denver.19:53
corvussince if there's some orphan org, mordred and i are the most likely to have access to that19:53
clarkbI don't need an answer now (we have until January 20 to respond)19:53
clarkbbut it would be helpful if we could get a general feel for whether or not people plan to attend and whether or not we can be productive after also summiting19:53
fungii expect to be at the ptg. that beer in denver's not gonna drink itself19:53
mordredclarkb: I will be at the PTG - I'm happy to be there with an infra hat on19:54
clarkbI expect that if we can make it and not be zombies then it will be a great opportunity to work on opendev and the config mgmt stuff19:54
clarkbI'll follow up to the infra list with ^ this info and ask for feedback/thoughts there too19:54
corvusagreed -- it's the zombie thing i'm most concerned about and need to reflect on a bit19:54
clarkbcorvus: ya me too particularly after berlin.19:54
clarkbI was really zombied19:55
clarkbanteaya: conference plague and exhaustion19:55
anteayaah thanks19:55
corvusanteaya: too tired to even use english correctly :)19:55
anteayathat makes sense19:55
clarkb#topic Open Discussion19:55
*** openstack changes topic to "Open Discussion (Meeting topic: infra)"19:55
jheskethI'm unsure if I can make Denver, but given the travel for me the more I can pack in the better.19:55
*** jamesmcarthur has joined #openstack-meeting19:55
anteayathough I can't imagine a state where you can't use english correctly corvus19:55
clarkbbefore our hour is up I'll open it up to missed/forgotten/last minute items19:56
jheskethGranted I'm usually a zombie anyway19:56
mordredjhesketh: BRAAAAINS19:56
anteayajhesketh: in a nice way19:56
clarkbjhesketh: thats a good point19:56
clarkbthe more packed in may make travel more worthwhile for some19:56
jheskethTo be honest, I'm kinda unsure why we wouldn't. We might not be as productive as usual, but what do we have to lose?19:57
fungi(besides our sanity, if we still have any i mean)19:58
clarkbjhesketh: ya I think it was mostly a concern that if the general feeling was flying home on the 2nd after summit then maybe we don't do it. But sounds like that isn't the case for most who have chimed in so far19:58
jheskethI lost that a long time ago19:58
clarkband as you say maybe we are zombies and don't get much done but we won't know until we get there and try19:58
fungimost of us have acclimated ourselves to these marathon conferences19:58
corvusthis is... longer than we've had before19:59
clarkbcorvus: especially if you factor in board meeting type pre activity19:59
clarkbit's like an 8-day conference19:59
* corvus curls up in a ball under his desk19:59
clarkband with that we are just about at time. Join us for kubernetes fun in room 6561 on pbx.openstack.org20:00
clarkbthank you everyone20:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"20:00
openstackMeeting ended Tue Jan  8 20:00:15 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)20:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-01-08-19.01.html20:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-01-08-19.01.txt20:00
openstackLog:            http://eavesdrop.openstack.org/meetings/infra/2019/infra.2019-01-08-19.01.log.html20:00
fungithanks for chairing, clarkb!20:00
*** iyamahat_ has joined #openstack-meeting20:01
*** gary_perkins has quit IRC20:01
*** armax has quit IRC20:03
*** bobh has joined #openstack-meeting20:03
*** iyamahat__ has quit IRC20:03
*** gary_perkins has joined #openstack-meeting20:06
*** Shrews has left #openstack-meeting20:06
*** mlavalle has left #openstack-meeting20:08
*** bobh has quit IRC20:10
*** jamesmcarthur_ has joined #openstack-meeting20:23
*** awaugama has quit IRC20:25
*** jamesmcarthur has quit IRC20:26
*** oneswig has joined #openstack-meeting20:48
*** janders has joined #openstack-meeting20:59
oneswig#startmeeting scientific-sig21:00
openstackMeeting started Tue Jan  8 21:00:13 2019 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.21:00
openstackUseful Commands: #action #agreed #help #info #idea #link #topic #startvote.21:00
*** openstack changes topic to " (Meeting topic: scientific-sig)"21:00
openstackThe meeting name has been set to 'scientific_sig'21:00
oneswigI even spelled it right first time21:00
oneswigHello SIG21:00
oneswig#link agenda for today https://wiki.openstack.org/wiki/Scientific_SIG#IRC_Meeting_January_8th_201921:01
*** radeks has quit IRC21:02
*** jmlowe has joined #openstack-meeting21:02
oneswigI'm hoping this year to revive one or two of our oft-discussed activities21:02
jandersg'day oneswig :)21:03
jandersDenver CFP open - exciting! :)21:03
jmloweHello everybody, happy new year21:03
oneswigjanders: happy new year!21:03
oneswigHi jmlowe, same to you21:03
jandersHappy New Year 2019 to you guys! :)21:03
oneswigShould be an interesting one21:04
jandersindeed! :)21:04
oneswigjmlowe: Are you hosting the LUG again this year?21:04
jmloweI believe I still owe janders a couple of emails21:04
jmloweI don't know21:04
oneswigSome of our team here rightly made the point that we've talked a lot about better Lustre support and have yet to do anything about it.21:05
jmloweLooks like Houston21:05
*** epei has joined #openstack-meeting21:05
oneswigAh, thanks for checking21:06
jmloweso we probably have people involved, but not on campus like 201721:06
oneswigjanders: there was somebody from Pawsey with a close interest on Lustre, do you remember their details?21:06
*** martial__ has joined #openstack-meeting21:08
*** janders_ has joined #openstack-meeting21:08
*** armax has joined #openstack-meeting21:08
oneswigOur company has a team design summit at the end of this week, I'm hoping we can help generate a spec proposal.  Would be good to understand more about use cases for that21:08
janders_sorry, I got kicked out for some reason21:08
*** janders has quit IRC21:08
janders_did my question about Lustre-GPFS approach make it or was it lost?21:08
oneswighey janders_, but you gained an underscore on the way back in :-)21:09
martial__(by the end of the year I will have 12 underscore at the end of that nick :) )21:09
oneswiglost to me, sorry21:09
oneswighi martial__21:09
janders_no worries21:09
oneswig#chair martial__21:09
openstackCurrent chairs: martial__ oneswig21:09
janders_I was wondering if running Lustre via GPFS re-export could be of interest?21:09
janders_I haven't done it myself but I've got another GPFS project going and some of the participants are praising GPFS ability to re-export other filesystems21:10
oneswigyikes - could that work?  And should it?21:10
*** jbadiapa has quit IRC21:10
janders_now GPFS seems to have heaps better OpenStack integration than Lustre (at least in some use cases)21:10
oneswigIf you've paid for GPFS, you might use it natively instead.21:11
janders_I will probably test the current state of OpenStack-GPFS integration in the next month or two (I used it successfully in 2016 but not recently)21:11
oneswigInteresting thought21:11
janders_true, however i was thinking of a use case where there is a substantial amount of data already on Lustre21:11
oneswigI had in mind Manila integration with Lustre - there are people with this use case I'm aware of.  I don't think the GPFS work covers that does it?21:12
janders_no, I don't think it does Manila nicely at this point in time21:13
janders_I used it for nova, glance and cinder back in the day, I think it can do object too21:13
jmloweI'm attempting to conjure Jonathan Mills, he's done a lot of work with gpfs and openstack at NASA21:13
oneswigjmlowe: go for it :-)21:13
janders_I will report back as I learn more about the current state of things with OpenStack and GPFS21:14
martial__Magic Mike ;)21:14
jmloweSlightly related, Bob's last day at PSC is this week, he's leaving to go to NASA to work with Jonathan21:14
martial__man, did not know ... good for him21:15
oneswigThere's news - good luck to him21:15
oneswigWho's going to look after Bridges?21:15
martial__NASA is open?21:15
jmloweNot sure, the other systems guy and heir apparent is also leaving PSC this week21:16
jmloweNASA is openish, I don't think they can afford to idle the HPC machines21:16
martial__make sense21:17
janders_I just realised that what I said about GPFS&Manila is partly wrong:21:17
janders_it does support manila, however through Ganesha21:17
oneswigGood digging janders_21:18
oneswigIn the Lustre analogy, I'd love to see an Lnet router in place of Ganesha.  Would that work?21:19
oneswigjanders_: the equivalent page for queens & rocky is not present, does that mean something?21:20
jmloweI think it would, you'd have to write a driver for it21:21
jmloweother drivers fire up instances and export volumes over kernel nfs, so it stands to reason that the framework is fairly flexible21:22
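A driver of the shape jmlowe describes would slot into Manila roughly like this. Everything here is a pseudostructural sketch for illustration: the class, the `router_manager` helper, and the export-path format are all invented; only the `create_share`/`delete_share` entry points mirror Manila's driver contract.

```python
# Hypothetical sketch of a Manila driver fronting Lustre via an lnet
# router instance, per the discussion above.  Not real Manila code.
class LustreLnetDriver:
    def __init__(self, router_manager):
        # router_manager: invented helper that boots a dual-homed
        # instance acting as the lnet router for a share network
        self.routers = router_manager

    def create_share(self, context, share, share_server=None):
        router = self.routers.ensure_router(share["share_network_id"])
        # Export location handed to the tenant: the router's
        # tenant-facing address plus an illustrative Lustre path
        return "%s:/lustre/%s" % (router.tenant_ip, share["id"])

    def delete_share(self, context, share, share_server=None):
        self.routers.release(share["share_network_id"])
```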
oneswigIt looks like the config is still there for queens & rocky : https://docs.openstack.org/manila/rocky/configuration/shared-file-systems/samples/manila.conf.html21:22
janders_oneswig: from my experience, GPFS support is often a release or two behind21:23
*** bcm_ has joined #openstack-meeting21:23
janders_however when I last deployed it, N-1 config happily worked in the N release21:23
jmloweI could see a driver that would fire up an instance and configure it as a lnet router21:23
janders_I will test it with Queens in the near future and will report back how well or badly it works :)21:24
oneswigI think this is the person from Pawsey I was looking for: https://www.mail-archive.com/lustre-discuss@lists.lustre.org/msg15189.html21:25
*** raildo has quit IRC21:25
oneswig#action oneswig to revive discussion on Lustre + Manila with Andrew Elwell21:26
oneswigOK - move on?21:26
oneswig#topic conferences and CFPs21:27
*** openstack changes topic to "conferences and CFPs (Meeting topic: scientific-sig)"21:27
oneswigTim Randles reminded me21:28
oneswigWhen the summit goes to Denver21:28
jmlowere: gpfs and openstack "It works!" - Jonathan Mills21:28
oneswigthanks jmlowe!21:28
janders_what talks/topics would you guys be interested in?21:28
janders_thanks mate! :)21:28
*** slaweq has quit IRC21:28
janders_what versions of GPFS and OpenStack is Jonathan referring to? Would be great to know :)21:29
oneswigWhen the summit goes to Denver Tim's looking to organise a post-summit trip to the baseball for SIG members21:29
janders_Denver CFP... what topics/talks would you guys like to see? :)21:29
jmloweconference call is preventing his attendance today21:30
oneswig(but with people buying their own tickets, mind)21:30
*** artom has quit IRC21:30
jmloweI'm in for baseball21:30
oneswigjmlowe: Tim said to mail him @gmail with your interest21:30
oneswigUnfortunately I must leave promptly and can't stay for the game, much though I'd like to.21:31
jmloweI'm interested in gpu's and AI use cases, scientific computing uses of storage21:31
oneswigI've heard it's just like cricket21:31
martial__seems right21:31
oneswigjanders_: I want to hear from anything that pushes the envelope for infrastructure and performance.21:32
jmlowegeneral catch up on other people using openstack to service scientific computing21:32
oneswigAre you track chairing perchance? :-)21:33
jmloweI could I suppose21:33
jmloweI should either serve or submit21:33
oneswigI've not had the call this time round...21:34
jmloweI don't have any ideas for submissions right now so committee service it will probably be21:34
oneswigI'd be interested to hear about people doing HIPAA, Fedramp, etc in production (nb, not just thinking about it)21:35
jmloweoh, that too21:35
jmlowehow do you serve medical research use cases while protecting varying degrees of ephi21:36
oneswigThey are apparently the majority of the SIG (although none are apparently present today...)21:36
oneswigThat's good food for thought.  We should all submit talks to enrich the track21:37
janders_+1! :)21:38
oneswigjmlowe: Are you planning to go to Lugano this year?21:38
oneswigmartial__: given the emphasis on containers last year, this could be a good one for DataMachines (although are you international?)21:39
oneswig#link HPCAC CFP http://hpcadvisorycouncil.com/events/2019/swiss-workshop/submissions.php21:39
oneswigI don't see anyone online within 6 timezones of London so I'll skip that one.21:40
jmloweNo plans as of right now, usually the director level attend unless there is a specific topic that has been invited.21:41
jmloweI would like to catch up on what CSCS and Switch have been up to over the past couple of years21:42
oneswigToo bad you weren't in Berlin, we were all in a log cabin themed restaurant :-)21:43
jmloweNo broken legs?21:44
oneswigIt was all "simulated"...21:45
jmlowe2 different SC attendees broke their tibias a few hours after I had spoken with them21:45
oneswigjmlowe: anger management course needed perhaps?21:45
martial__that and bad drivers in Dallas we are told21:46
jmlowevehicle vs pedestrian and Blair's parkour could use some work21:46
oneswigWhat are the Americas conferences coming up?  Is there a CFP for the NSF conference?21:47
jmlowemost of the nsf resource providers will be here https://www.pearc.org/21:48
jmloweChicago this year21:48
oneswigAre you going?21:48
jmloweI don't see how I can't21:49
*** factor has joined #openstack-meeting21:50
oneswig#topic AOB21:50
*** openstack changes topic to "AOB (Meeting topic: scientific-sig)"21:50
janders_going back to the manila topic:21:51
janders_I wonder if it is possible to take a slightly more generic approach to RDMA filesytems21:51
janders_so that we can do Lustre, BeeGFS, maybe GPFS-native through Manila21:52
oneswigjanders_: it does make sense that there are some common challenges.  Quite a few differences too though.21:52
janders_at the end of the day, if the implementation involves spinning up dual-homed storage router instances, there might be enough overlap to support this approach21:53
oneswigBut I do agree, if something was to be created, we wouldn't want a vendor to make it exclusive to one technology21:53
*** gouthamr has left #openstack-meeting21:53
janders_what's your feel regarding Thinkparq's interest in OpenStack integration?21:54
janders_(of BeeGFS)?21:54
janders_do you think they might be happy to get some of their guys to work on it or is it more like "if the community does the work, we'll help them out a bit"?21:55
oneswigI don't know21:56
oneswigI know they are enthusiastic for OpenStack support21:56
oneswigbut I am not sure what they can commit themselves.21:56
oneswigI think they'll do what they can to help.  I don't know what that means.21:57
janders_I will ponder this topic a bit more21:57
oneswigWe are nearly at the hour21:58
janders_we're in regular touch with Thinkparq regarding our BeeGFS work21:58
oneswigjmlowe: wanted to ask, how is Jetstream?21:58
*** armax has quit IRC21:58
janders_I will bring this up when the time is right (I want them focused on the scratch filesystem for now :)21:58
oneswigjanders_: I expect you know much more than I do then21:58
oneswigI only see them at conferences21:58
janders_unfortunately not on this topic (at least not yet)21:59
janders_thanks for the chat guys!21:59
janders_have a good one21:59
oneswigtime to close, thanks all22:00
janders_all the best in the New Year 2019! :)22:00
janders_it'll be an exciting one22:00
oneswigjanders_: no fate but what we make :-)22:00
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:00
openstackMeeting ended Tue Jan  8 22:00:29 2019 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:00
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-01-08-21.00.html22:00
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-01-08-21.00.txt22:00
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2019/scientific_sig.2019-01-08-21.00.log.html22:00
*** oneswig has quit IRC22:01
*** hongbin has joined #openstack-meeting22:01
*** mjturek has quit IRC22:03
*** gouthamr has joined #openstack-meeting22:04
*** bcm_ has quit IRC22:04
*** trandles has joined #openstack-meeting22:06
*** trandles has quit IRC22:07
*** carlos_silva has quit IRC22:08
*** haleyb is now known as haleyb|away22:16
*** epei has quit IRC22:20
*** slaweq has joined #openstack-meeting22:23
*** janders_ has quit IRC22:23
*** e0ne has quit IRC22:27
*** armax has joined #openstack-meeting22:53
*** rcernin has joined #openstack-meeting22:53
*** jesusaur has quit IRC23:02
*** _alastor1 has joined #openstack-meeting23:28
*** armax has quit IRC23:29
*** armax has joined #openstack-meeting23:30
*** _alastor_ has quit IRC23:31
*** jamesmcarthur_ has quit IRC23:43
*** jamesmcarthur has joined #openstack-meeting23:45
*** jamesmcarthur has quit IRC23:50
*** liuyulong has joined #openstack-meeting23:51
*** armax has quit IRC23:54

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!