Tuesday, 2021-01-12

14:00 <slaweq> #startmeeting networking
14:00 <openstack> Meeting started Tue Jan 12 14:00:17 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00 *** openstack changes topic to " (Meeting topic: networking)"
14:00 <openstack> The meeting name has been set to 'networking'
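The MeetBot commands listed above are how the chair structures the minutes (topics, action items, links). As a minimal illustrative sketch, not part of MeetBot itself, such commands could be scraped from a log like this one:

```python
import re

# Illustrative regex for the MeetBot commands listed above; the names
# MEETBOT_CMD and extract_commands are hypothetical helpers, not MeetBot API.
MEETBOT_CMD = re.compile(
    r"#(startmeeting|endmeeting|topic|action|agreed|help|info|idea|link|startvote)"
    r"\s*(.*)")

def extract_commands(lines):
    """Return (command, argument) pairs found in meeting log lines."""
    found = []
    for line in lines:
        match = MEETBOT_CMD.search(line)
        if match:
            found.append((match.group(1), match.group(2).strip()))
    return found
```

Running it over the `#topic` and `#action` lines below would recover the meeting's agenda and action items.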
14:00 <slaweq> welcome :)
14:00 <obondarev> hi all
14:02 <slaweq> ok, let's start
14:02 <slaweq> HNY to those who I'm seeing for the first time this year :)
14:02 <slaweq> #topic announcements
14:02 *** openstack changes topic to "announcements (Meeting topic: networking)"
14:02 <slaweq> next week is the Wallaby-2 milestone already
14:03 <slaweq> we still have a couple of specs to review
14:03 <slaweq> please check and try to review them this week if possible
14:03 <slaweq> for now we have merged only https://review.opendev.org/c/openstack/neutron-specs/+/739549
14:04 <slaweq> and those are all the announcements/reminders for today
14:04 <slaweq> anything else you want to share with the team?
14:05 <slaweq> ok, I guess that means "no" :)
14:05 <slaweq> so let's move on
14:06 <slaweq> #topic Blueprints
14:06 *** openstack changes topic to "Blueprints (Meeting topic: networking)"
14:06 <slaweq> these are the things scheduled for wallaby-2
14:06 <slaweq> do you have any updates on any of them?
14:07 <slaweq> I have an update about https://blueprints.launchpad.net/neutron/+spec/enginefacade-switch
14:07 <slaweq> we really have it all merged in neutron now
14:08 <slaweq> I just abandoned the old "conclusion" patch which was used to check what still needed to change
14:08 <bcafarel> mission accomplished :)
14:08 <slaweq> I also checked the stadium projects today
14:08 <slaweq> we still have some changes to do in the midonet and vpnaas repos
14:08 <slaweq> and some minor changes in the neutron docs
14:08 <slaweq> I will try to push patches this week
14:09 <slaweq> so the mission is almost accomplished
14:09 <ralonsoh> well done
14:09 <slaweq> as ralonsoh summed up in a comment - it was (is) a 4-year effort :)
14:09 <slaweq> another update, about https://blueprints.launchpad.net/neutron/+spec/secure-bac-roles
14:10 <slaweq> we already have patches to migrate all our rbac roles to the new roles
14:11 <slaweq> so please take a look at those patches
14:11 <slaweq> we need to check all of them carefully to be sure we are not breaking/changing something by mistake
14:12 <slaweq> another update, about https://blueprints.launchpad.net/neutron/+spec/default-dns-zone-per-tenant
14:12 <slaweq> this is an old BP which is almost done
14:12 <slaweq> the missing patches are:
14:12 <slaweq> neutron patch https://review.opendev.org/#/c/686343/
14:12 <slaweq> neutron-tempest-plugin test https://review.opendev.org/#/c/733994/
14:13 <slaweq> please take a look at them too
14:13 <slaweq> and regarding https://blueprints.launchpad.net/neutron/+spec/address-groups-in-sg-rules
14:13 <slaweq> patches are at https://review.opendev.org/#/q/topic:bp/address-groups-in-sg-rules
14:13 <slaweq> please add them to your review queues as well :)
14:14 <slaweq> and that's all from my side about BPs
14:16 <slaweq> so let's move on to the next topic
14:16 <slaweq> #topic Community goals
14:16 *** openstack changes topic to "Community goals (Meeting topic: networking)"
14:16 <slaweq> ralonsoh: amotoki: any updates?
14:16 <ralonsoh> still working on the rootwrap deprecation
14:17 <ralonsoh> and attempting to migrate, in a generic way, the rootwrap execution
14:17 <ralonsoh> I tried to do this in one shot, and it was impossible
14:17 <ralonsoh> that's why I'm taking this approach
14:17 <ralonsoh> btw, we are executing some long-lived commands using rootwrap
14:18 <ralonsoh> this is incorrect, these executions should be done using a service wrapper
14:18 <ralonsoh> I'll try to fix that in the next patches
14:18 <ralonsoh> (that's all)
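The distinction ralonsoh draws above - short one-shot commands versus long-lived processes that need a service wrapper to manage them - can be sketched in plain Python. This is only an illustrative model, not neutron's actual rootwrap or privsep code (the real helpers also handle privilege escalation and command filtering):

```python
import subprocess

def run_once(cmd):
    """One-shot execution: fine for short commands (the rootwrap model)."""
    return subprocess.run(cmd, capture_output=True, text=True, check=True).stdout

class ServiceWrapper:
    """Keeps a handle to a long-lived process so it can be monitored and
    stopped cleanly, instead of fire-and-forget through a one-shot helper."""

    def __init__(self, cmd):
        self._cmd = cmd
        self._proc = None

    def start(self):
        self._proc = subprocess.Popen(self._cmd)

    def alive(self):
        return self._proc is not None and self._proc.poll() is None

    def stop(self):
        if self.alive():
            self._proc.terminate()
            self._proc.wait()
```

The point of the wrapper is that the parent keeps ownership of the process: it can check liveness and terminate it, which a one-shot exec of a daemonizing command cannot do.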
14:18 <slaweq> ok, a lot of work
14:18 <slaweq> if you need any help, please ping me
14:19 <ralonsoh> (another 4 years hehehehe)
14:19 <amotoki> got disconnected...
14:19 <amotoki> regarding the policy stuff, https://review.opendev.org/c/openstack/neutron/+/764401 looks in good shape but it needs to pass the gate. I will check the status.
14:19 *** lajoskatona has joined #openstack-meeting-3
14:20 <slaweq> amotoki: thx
14:20 <slaweq> this patch is in the check queue now AFAICT
14:21 <slaweq> so this should be much faster than ralonsoh's 4-year effort with rootwrap :P
14:21 <amotoki> as it is much smaller than the rootwrap one :p
14:22 <slaweq> yep :)
14:22 <slaweq> ok, let's move on
14:22 <slaweq> next topic
14:22 <slaweq> #topic Bugs
14:22 *** openstack changes topic to "Bugs (Meeting topic: networking)"
14:22 <slaweq> lajoskatona was our bug deputy 2 weeks ago, his report is at http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019602.html
14:22 <slaweq> and amotoki was bug deputy last week
14:22 <lajoskatona> yeah, that was a quiet week
14:22 <slaweq> any bugs you want to highlight here?
14:23 <amotoki> I just sent mine before the meeting: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019781.html
14:23 <amotoki> sorry for being late
14:23 <lajoskatona> the only one was about security-group operations being slow
14:23 <lajoskatona> and recently there was a mail about that on the mailing list too
14:23 <amotoki> it was a quiet week and half of them are about gate failures
14:25 <lajoskatona> I think this was the mail: http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019685.html
14:25 <amotoki> regarding the security group issue, is it related to RBAC support in security groups? if so, I think mlavalle's fix improves the performance again
14:26 <lajoskatona> as I remember it's not clear from the bug or from the mail, but it's possible that recent fixes improve the performance
14:26 <ralonsoh> I think this is more related to the amount of data you need to transmit to the compute nodes when using FW (as slaweq commented in the mail)
14:26 <ralonsoh> I don't see any problem in the DB request
14:27 <ralonsoh> (apart from the number of SGs and rules)
14:28 <slaweq> the LP bug was more about the API request to list SGs
14:28 <slaweq> and the ML thread was about applying SG rules on the compute node
14:28 <slaweq> so IMO those are different problems
14:29 <slaweq> but maybe I checked a different ML thread
14:31 *** Luzi has quit IRC
14:31 <slaweq> I see that the LP bug is now marked as "Opinion" and the reporter wrote that they will check it on a newer version
14:31 <slaweq> let's wait for the result of that test and we will see if it is still an issue
14:32 <lajoskatona> no, you are right, I checked and the mail was about applying rules to VMs on computes and the LP bug was about listing them
14:33 <slaweq> any other bugs you want to discuss today?
14:34 <slaweq> if not, then let's move on
14:34 <slaweq> our bug deputy this week is mlavalle
14:35 <slaweq> and next week it will be rubasov
14:35 <slaweq> rubasov: is that ok for you?
14:35 <mlavalle> I have a question
14:35 <slaweq> thank you both :)
14:35 <slaweq> mlavalle: sure
14:35 <mlavalle> what happens to bugs in the gap between one deputy and the next one?
14:36 <slaweq> do we have such gaps?
14:36 <mlavalle> for example: https://bugs.launchpad.net/neutron/+bug/1910691. Is this in amotoki's report?
14:36 <openstack> Launchpad bug 1910691 in neutron "In "test_dvr_router_lifecycle_ha_with_snat_with_fips", the HA proxy PID file is not removed" [Undecided,New]
14:37 <ralonsoh> that's mine
14:37 <slaweq> this was reported last Friday, so it should normally be included in amotoki's report
14:37 <slaweq> maybe he forgot about it :)
14:37 <ralonsoh> (sorry, the report)
14:37 <mlavalle> so the report has to cover from Monday to Sunday?
14:37 <slaweq> I always thought that deputy duty is for one week, so Monday to Monday more or less
14:38 <bcafarel> I see it in http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019781.html
14:38 <slaweq> that was always my understanding :)
14:38 <bcafarel> mine too :)
14:38 <slaweq> thx bcafarel :)
14:38 <mlavalle> let's move on
14:38 <slaweq> indeed, it's there
14:38 <slaweq> sure, thx for asking mlavalle
14:38 *** jamesmcarthur has joined #openstack-meeting-3
14:39 <slaweq> ok, so let's move on
14:39 <slaweq> #topic On demand
14:39 *** openstack changes topic to "On demand (Meeting topic: networking)"
14:39 <slaweq> we have one additional topic for today
14:39 <slaweq> maybe you already saw the thread on the ML
14:40 <slaweq> but I wanted to see what you think about "Drop (or not) l-c testing" --> http://lists.openstack.org/pipermail/openstack-discuss/2021-January/019659.html
14:40 <slaweq> I know that the TC will be discussing this at their next meeting
14:40 <slaweq> but it also seems that projects can make their own decisions on that if they want
14:41 <slaweq> so e.g. oslo dropped it already
14:41 <slaweq> wdyt about that in neutron and the stadium projects?
14:41 <ralonsoh> what's the benefit of l-c testing?
14:41 <ralonsoh> apart from checking that we can run Neutron with the lowest library versions
14:42 <bcafarel> indeed, we may get an official "you can all drop it" in a few days, but in the meantime
14:42 <slaweq> ralonsoh: I don't have a lot of experience with that, but the main argument on the ML was that it's useful for packaging
14:42 <amotoki> one merit is that it detects whether our lower bounds are correct or not, but I am not sure how valuable that merit is.
14:42 <bcafarel> ack, the idea has merit (does neutron actually work with the lower versions mentioned in l-c)
14:43 <bcafarel> but until recently the l-c file was not really applicable anyway
14:43 <bcafarel> and it is not perfect now either - see recent discussions on the ML
14:44 <amotoki> yeah, it was usually broken until the improvement in the latest pip resolver came.
14:44 <lajoskatona> regarding the stadiums, most of them have trouble with l-c jobs
14:44 <lajoskatona> periodically, we can say :-)
14:44 <ralonsoh> and neutron stable versions
14:44 <bcafarel> + multiply the effort by the number of stable branches
14:44 <bcafarel> ralonsoh: :) great minds think alike
14:44 <lajoskatona> as I understand it must be periodically maintained, and that is huge work
14:45 <amotoki> IMHO we don't need it for stable branches as we rarely change requirements after the release.
14:45 <ralonsoh> I'd say that we want to drop this CI job
14:46 <bcafarel> amotoki: right, plus it probably does not help many people if we fix lower constraints after the release
14:46 <slaweq> so you propose to drop this job from stable branches and keep it (for now) in master, right?
14:46 <bcafarel> at least as a first step :)
14:46 <amotoki> yes, for the stable branches
14:47 <slaweq> sounds reasonable to me
14:47 <bcafarel> I think in master it should probably be rethought/redesigned to be useful; the TC opinion should give pointers I think
14:48 <slaweq> anyone want to send patches then?
14:48 <ralonsoh> I can
14:48 <bcafarel> more than motivated to send a few too to kick these out :)
14:48 <slaweq> ok, so I think we are done with today's meeting
14:48 <amotoki> regarding the master branch, I see value to some extent. I think the recent situation is a bit different as the new pip resolver was introduced and the inconsistencies were revealed.
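The "are our lower bounds correct" check amotoki mentions can be approximated outside the gate. The following is a hypothetical, stdlib-only sketch: real lower-constraints.txt entries can carry environment markers and extras, and real version comparison should use the packaging library, so this deliberately simplifies both:

```python
def parse_lower_constraints(text):
    """Collect 'pkg==x.y.z' pins from lower-constraints.txt-style content.

    Simplified: ignores comments, environment markers, and extras.
    """
    pins = {}
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()
        if "==" in line:
            name, version = line.split("==", 1)
            pins[name.strip().lower()] = version.strip()
    return pins

def version_tuple(version):
    """Naive numeric version key; PEP 440 ordering needs the packaging lib."""
    return tuple(int(part) for part in version.split(".") if part.isdigit())

def unsatisfied(pins, installed):
    """Report packages whose installed version is below the pinned lower bound."""
    bad = {}
    for name, lower in pins.items():
        have = installed.get(name)
        if have is not None and version_tuple(have) < version_tuple(lower):
            bad[name] = (have, lower)
    return bad
```

An l-c job does roughly the inverse: it installs exactly the pinned lower bounds and runs the test suite, which is what revealed the inconsistencies once the new pip resolver started enforcing constraints strictly.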
14:49 <amotoki> that's all from me
14:50 <slaweq> thx amotoki
14:50 <slaweq> thx for attending the meeting today
14:50 <bcafarel> thanks and HNY!
14:50 <slaweq> please remember that we have the CI meeting in 10 minutes here :)
14:50 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
14:50 <openstack> Meeting ended Tue Jan 12 14:50:29 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
14:50 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-01-12-14.00.html
14:50 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-01-12-14.00.txt
14:50 <openstack> Log:            http://eavesdrop.openstack.org/meetings/networking/2021/networking.2021-01-12-14.00.log.html
15:00 <slaweq> #startmeeting neutron_ci
15:00 <openstack> Meeting started Tue Jan 12 15:00:17 2021 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
15:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
15:00 *** openstack changes topic to " (Meeting topic: neutron_ci)"
15:00 <openstack> The meeting name has been set to 'neutron_ci'
15:00 <bcafarel> hi again
15:01 <slaweq> Grafana dashboard: http://grafana.openstack.org/dashboard/db/neutron-failure-rate
15:01 <slaweq> please open it and we can start
15:02 <slaweq> #topic Actions from previous meetings
15:02 *** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"
15:02 <slaweq> slaweq to update the grafana dashboard
15:02 <slaweq> patch https://review.opendev.org/c/openstack/project-config/+/767470
15:02 <slaweq> it's merged already
15:02 <slaweq> and that was the only action from the last meeting
15:03 <slaweq> so I think we can move on
15:03 <slaweq> next topic is
15:03 <slaweq> #topic Stadium projects
15:03 *** openstack changes topic to "Stadium projects (Meeting topic: neutron_ci)"
15:03 <slaweq> any CI-related topics for the stadium projects?
15:03 <bcafarel> most are still red with pip fun I think
15:04 <bcafarel> I saw some patches starting to appear for them, not sure if they are working (or merged?)
15:05 <slaweq> bcafarel: but is that for master or stable branches?
15:05 <bcafarel> https://review.opendev.org/c/openstack/networking-bgpvpn/+/769657 for example
15:05 <bcafarel> slaweq: master, I will send patches to drop l-c jobs on stable soonish (ralonsoh is taking the neutron ones)
15:06 <slaweq> ok, I will check those l-c patches for the master branch then
15:07 <slaweq> anything else regarding stadium or can we move on?
15:08 <slaweq> so let's move on
15:08 <slaweq> #topic Stable branches
15:08 *** openstack changes topic to "Stable branches (Meeting topic: neutron_ci)"
15:08 <slaweq> Victoria dashboard: https://grafana.opendev.org/d/HUCHup2Gz/neutron-failure-rate-previous-stable-release?orgId=1
15:08 <slaweq> Ussuri dashboard: https://grafana.opendev.org/d/smqHXphMk/neutron-failure-rate-older-stable-release?orgId=1
15:08 <slaweq> except for that l-c issue, I think all is good there
15:08 <bcafarel> indeed, I saw some failures but nothing too bad
15:09 <bcafarel> stein is waiting for the rocky swift grenade fix (it was W+1 30 min ago), and then all branches should be back in working order
15:09 <bcafarel> pending the l-c cleanup (the fix for it is merged up to train at the moment)
15:09 <slaweq> thx bcafarel for taking care of it
15:10 <slaweq> I think we can move on to the next topic then
15:10 <slaweq> #topic Grafana
15:10 *** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"
15:10 <slaweq> #link https://grafana.opendev.org/d/PfjNuthGz/neutron-failure-rate?orgId=1
15:10 <bcafarel> and haleyb and a few others too :) it will be nice to forget that part
15:11 <slaweq> hi haleyb :)
15:11 <haleyb> are there gate failures? :-p
15:11 <slaweq> thx for helping with the pip issues :)
15:11 <bcafarel> haleyb: no, I was just pointing out you helped/suffered a lot with that new fancy pip resolver too
15:11 <haleyb> I feel like it's been a Thor's hammer kind of fire drill
15:12 <ralonsoh> good work on this!
15:12 * haleyb is still suffering with gate things
15:12 <slaweq> haleyb: we all suffer with gate things :P
15:13 <slaweq> but, speaking about the gate and grafana
15:13 <slaweq> things look much better this week IMO
15:13 <slaweq> or even this year ;)
15:14 <slaweq> I saw surprisingly many patches merged recently without rechecking dozens of times :)
15:15 <slaweq> do you have anything related to our dashboard?
15:15 <slaweq> or can we move on to some specific issues which I found recently?
15:15 <bcafarel> nothing from me
15:17 <slaweq> ok, so let's move on
15:17 <slaweq> #topic functional/fullstack
15:17 *** openstack changes topic to "functional/fullstack (Meeting topic: neutron_ci)"
15:17 <slaweq> those jobs are still the most often failing ones
15:17 <slaweq> first, functional
15:18 <slaweq> I again saw this error 500 during network creation in ovn tests: https://zuul.opendev.org/t/openstack/build/476b4b1684df45bca7ecebbd2d7353b9/logs
15:18 <slaweq> but that was only once, and I'm not sure if otherwiseguy's patch was already merged then or not
15:18 * otherwiseguy looks
15:20 <slaweq> IIRC it was this patch https://review.opendev.org/c/openstack/neutron/+/765874
15:20 <slaweq> and it was merged Jan 5th
15:20 <slaweq> and the failure which I saw was from Jan 4th
15:20 <slaweq> so now we should finally be good with that issue
15:21 <otherwiseguy> ah, yeah.
15:21 * otherwiseguy crosses fingers
15:21 <ralonsoh> the problem, I think, is that this is not working with wsgi
15:21 <ralonsoh> because we don't call "post_fork_initialize"
15:21 <ralonsoh> but I think lucas is investigating this
15:22 <slaweq> yes, he is working on that issue with uwsgi
15:22 <otherwiseguy> I remember the functional test base manually calling post_fork_initialize?
15:23 <ralonsoh> yes, in _start_ovsdb_server_and_idls
15:25 <slaweq> ok, let's move on
15:25 <slaweq> I also found an issue with neutron.tests.functional.agent.common.test_ovs_lib.BaseOVSTestCase.test_update_minimum_bandwidth_queue_no_qos_no_queue
15:26 <slaweq> did you see such failures already?
15:26 <otherwiseguy> ralonsoh and I talked about this one, I believe.
15:26 <ralonsoh> one sec
15:26 <otherwiseguy> I think we discovered that two tests were using the same port name and maybe that was causing an issue?
15:26 <slaweq> ahh, right
15:26 <ralonsoh> both patches should help
15:26 <slaweq> I saw this patch today
15:27 <slaweq> both already merged
15:27 <slaweq> so we should be ok with those
15:27 <slaweq> thx ralonsoh
15:27 <otherwiseguy> yay ralonsoh :)
15:27 <slaweq> and that's all regarding the functional job
15:28 <slaweq> now fullstack
15:28 <slaweq> here I found one issue
15:28 <slaweq> with neutron.tests.fullstack.test_qos.TestMinBwQoSOvs.test_min_bw_qos_port_removed
15:28 <slaweq> and I saw it twice:
15:28 <slaweq> https://storage.bhs.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_18a/740569/2/check/neutron-fullstack-with-uwsgi/18a1d60/testr_results.html
15:28 <slaweq> https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f87/749012/15/check/neutron-fullstack-with-uwsgi/f87df94/testr_results.html
15:29 <ralonsoh> I'll take a look at this
15:29 <slaweq> ralonsoh: thx
15:29 <ralonsoh> at least I'll add some logs to print the qoses and queues
15:29 <slaweq> in the logs there is a RowNotFound error: https://storage.gra.cloud.ovh.net/v1/AUTH_dcaab5e32b234d56b626f72581e3644c/zuul_opendev_logs_f87/749012/15/check/neutron-fullstack-with-uwsgi/f87df94/controller/logs/dsvm-fullstack-logs/TestBwLimitQoSOvs.test_bw_limit_qos_port_removed_egress_.txt
15:30 <ralonsoh> maybe I need to do the same as in the FTs, add a wait event
15:30 <slaweq> in both cases there is the same error
15:30 <slaweq> ralonsoh: maybe
15:30 <ralonsoh> perfect, that's "good"
15:30 <slaweq> #action ralonsoh will check fullstack test_min_bw_qos_port_removed issues
15:31 <slaweq> thank you
15:31 <slaweq> #topic Tempest/Scenario
15:31 *** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"
15:31 <slaweq> here I found just one issue, in neutron-tempest-plugin-scenario-ovn
15:32 <slaweq> but I saw this issue only once so far
15:32 <slaweq> did you maybe see it too?
15:33 <otherwiseguy> I suppose I can take a look at it.
15:33 <slaweq> otherwiseguy: thx a lot
15:34 <slaweq> I will report that in LP and give you the link later
15:34 <otherwiseguy> I can't just keep complaining about CI and not fix things, I suppose. :p
15:34 <slaweq> otherwiseguy: everyone is doing that :P
15:34 <slaweq> thx a lot for your help, it's really appreciated :)
15:35 <slaweq> ok, those are all the issues regarding scenario jobs for today
15:35 <slaweq> those jobs seem to be pretty stable recently IMO
15:35 <slaweq> let's move on
15:35 <slaweq> #topic Periodic
15:35 *** openstack changes topic to "Periodic (Meeting topic: neutron_ci)"
15:36 <slaweq> I noticed that the neutron-ovn-tempest-ovs-master-fedora periodic job has been failing 100% of the time for a few days
15:36 <slaweq> I opened bug https://bugs.launchpad.net/neutron/+bug/1911128
15:36 <openstack> Launchpad bug 1911128 in neutron "Neutron with ovn driver failed to start on Fedora" [Critical,Confirmed]
15:37 <slaweq> otherwiseguy: can you maybe take a look at that one? :)
15:37 <otherwiseguy> slaweq: sure :)
15:38 <slaweq> it looks to me like maybe ovn isn't started there at all
15:38 <slaweq> but it's failing like that every day in the fedora job
15:38 <slaweq> thx otherwiseguy
15:38 <otherwiseguy> yeah: CRITICAL neutron [None req-4c1185cb-214e-4848-91b8-ea3b529f1d30 None None] Unhandled error: neutron_lib.callbacks.exceptions.CallbackFailure: Callback neutron.plugins.ml2.drivers.ovn.mech_driver.mech_driver.OVNMechanismDriver.pre_fork_initialize-627113 failed with "Could not retrieve schema from ssl:"
15:39 <otherwiseguy> that doesn't seem good. :p
15:39 <slaweq> #action otherwiseguy to check the fedora ovn periodic job issue
15:39 <slaweq> ok, those are all the CI-related things from me for today
15:39 <slaweq> do you have anything else you want to discuss today?
15:40 <bcafarel> stable CI should work better soon, seeing the series of "drop l-c" patches appearing in #openstack-neutron :)
15:40 <slaweq> yeah, I saw it :)
15:40 <slaweq> thx ralonsoh and bcafarel for sending them
15:40 <slaweq> thx for attending the meeting
15:40 <slaweq> and see you online
15:40 <bcafarel> and lajoskatona too
15:41 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
15:41 <openstack> Meeting ended Tue Jan 12 15:41:03 2021 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
15:41 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-01-12-15.00.html
15:41 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-01-12-15.00.txt
15:41 <openstack> Log:            http://eavesdrop.openstack.org/meetings/neutron_ci/2021/neutron_ci.2021-01-12-15.00.log.html
15:41 <slaweq> right, thx lajoskatona too :)

Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!