Tuesday, 2018-10-02

03:46 *** sagara has joined #openstack-meeting
03:58 *** tpatil has joined #openstack-meeting
04:01 <tpatil> Hi Masakari members
04:01 <tpatil> Sampath has taken the day off today, so I will chair today's Masakari meeting
04:04 <tpatil> #startmeeting Masakari
04:04 <openstack> Meeting started Tue Oct  2 04:04:08 2018 UTC and is due to finish in 60 minutes.  The chair is tpatil. Information about MeetBot at http://wiki.debian.org/MeetBot.
04:04 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
04:04 *** openstack changes topic to " (Meeting topic: Masakari)"
04:04 <openstack> The meeting name has been set to 'masakari'
04:05 <tpatil> In the last meeting, I had requested reviews of a few patches
04:06 <tpatil> sagara: did you get time to review those patches?
04:06 *** tpatil_ has joined #openstack-meeting
04:07 <sagara> Yes, I remember, but not yet. Could you post those Gerrit links?
04:07 <tpatil_> #link https://review.openstack.org/#/c/590735/
04:08 <tpatil_> #link https://review.openstack.org/#/c/591557/
04:09 <sagara> I'll check them after the meeting
04:09 <tpatil_> Thank you
04:10 <tpatil_> #topic Bugs (stuck/critical)
04:10 *** tpatil has quit IRC
04:11 <openstack> Launchpad bug 1779752 in masakari-monitors "masakari segment-list returns error (openstack queens)" [Critical,Confirmed] - Assigned to Shilpa Devharakar (shilpasd)
04:11 <tpatil_> This bug is fixed by the proposed patches listed above. It's marked critical, so we should merge them ASAP
04:12 <tpatil_> There are two other high-priority bugs assigned to samP
04:12 <tpatil_> He is not available today, so we can skip discussing those two bugs
04:13 <tpatil_> Are there any other bugs you want to discuss now?
04:14 <sagara> Yes, let's skip that discussion. But could you post samP's Gerrit URLs, just in case?
04:14 <tpatil_> sagara: I forgot to tell you about this patch: https://review.openstack.org/#/c/562973/
04:15 <tpatil_> Please review the above patch as well
04:15 <tpatil_> From my end: we are making slow progress on adding notifications and on writing functional tests in openstacksdk
04:16 <sagara> OK, I will review 590735, 591557 and 562973 as soon as possible
04:17 <tpatil_> Thank you
04:19 <tpatil_> Let's end the meeting if there are no other topics left for discussion
04:19 <sagara> I don't have any other topics
04:20 <tpatil_> OK, let's end the meeting then
04:20 <tpatil_> Thank you
04:22 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
04:22 <openstack> Meeting ended Tue Oct  2 04:22:03 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
04:22 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-10-02-04.04.html
04:22 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-10-02-04.04.txt
04:22 <openstack> Log:            http://eavesdrop.openstack.org/meetings/masakari/2018/masakari.2018-10-02-04.04.log.html
16:00 *** mlavalle has joined #openstack-meeting
16:00 <slaweq> #startmeeting neutron_ci
16:00 <openstack> Meeting started Tue Oct  2 16:00:26 2018 UTC and is due to finish in 60 minutes.  The chair is slaweq. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00 *** openstack changes topic to " (Meeting topic: neutron_ci)"
16:00 <openstack> The meeting name has been set to 'neutron_ci'
16:01 <slaweq> let's wait a few minutes for haleyb, njohnston and the others
16:02 <mlavalle> no problem
16:02 <slaweq> ok, I think we can start now
16:03 <slaweq> #topic Actions from previous meetings
16:03 *** openstack changes topic to "Actions from previous meetings (Meeting topic: neutron_ci)"
16:03 <slaweq> mlavalle will work on a logstash query to find out whether the issues with test_attach_volume_shelved_or_offload_server still happen
16:03 <mlavalle> I did work on that
16:04 <mlavalle> I watched it over the past 7 days
16:04 <mlavalle> as of today, no hits whatsoever
16:04 <mlavalle> the last hit was https://review.openstack.org/#/c/601336
16:04 <mlavalle> and as you can see, the problem is the patch itself
16:05 <mlavalle> that test creates an instance and then SSHes to it
16:05 <mlavalle> if we mess something up in a patch, sometimes we break SSH
16:05 <mlavalle> and that's what we see
16:05 <mlavalle> so it's not a bug
16:06 <slaweq> ok, so I think we can stop working on this one, at least for now
16:06 <slaweq> I hope it will not come back in the future :)
16:06 <mlavalle> I'll preserve the query in case we need it in the future
16:06 <slaweq> thx mlavalle for checking that
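[The kind of query mlavalle describes, run against the OpenStack-Infra Logstash/Kibana instance, could look like the following. His actual saved query is not in the log, so the terms here are an illustrative assumption in the usual elastic-recheck style:]

```
message:"test_attach_volume_shelved_or_offload_server" AND message:"FAILED" AND tags:"console"
```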
16:06 <slaweq> next one:
16:06 <slaweq> njohnston will debug why the fullstack-py36 job is using py35
16:07 <njohnston> I think bcafarel beat me to the punch a bit on this one
16:08 <njohnston> but I think more changes are needed
16:08 <njohnston> specifically, I need to check this, but my impression was that you had to use the ubuntu-bionic nodeset to get a host that has 3.6 installed
16:09 <slaweq> it looks like there is indeed no py36 in xenial: http://logs.openstack.org/49/604749/1/check/neutron-fullstack-python36/490122b/job-output.txt.gz#_2018-09-24_12_36_02_778606
16:09 <njohnston> so I will update that and see if we have more success with that nodeset
16:09 <bcafarel> the review is all yours njohnston :)
16:10 <njohnston> I'll get that done during the meeting
16:10 <slaweq> maybe we should just make it fullstack-python3 and not specify the exact Python version used in the tests?
16:11 <njohnston> I would be all in favor of that... except that it is known that python 3.7 includes some non-backwards-compatible changes that may break things
16:11 <njohnston> at least that is what I heard from the infra guys
16:11 <njohnston> that is my only reservation
16:12 <slaweq> yep, but for now py37 is not even released, right?
16:12 <slaweq> if it is released and we have to test on distros with that version, we will add a new job then
16:12 <njohnston> It was released 2018-06-27: https://www.python.org/downloads/release/python-370/
16:12 <bcafarel> not in the current CI distros, but it will be there at some point
16:13 <slaweq> right now many people still use xenial, e.g. for development and to run fullstack tests locally (me, for example), and it will not work for me if we specify exactly python36 in tox.ini, right?
16:14 <slaweq> ok, it's released, but is it in any distro which we support?
16:14 <njohnston> That is a good point.  So we should keep the old tox target around so people can test locally.
16:15 <bcafarel> isn't the idea to leave generic python3 in tox.ini and set the specific versions in the gates/zuul?
16:16 <slaweq> bcafarel: I think that would be better than specifying an exact version in tox.ini
16:16 <njohnston> bcafarel: I can alter your change to do that; right now it calls out 3.6 specifically: https://review.openstack.org/#/c/604749/1/tox.ini
16:16 <njohnston> bcafarel: but once we switch to bionic we can probably omit the dot-version
16:17 <slaweq> maybe we should consider simply replacing the neutron-fullstack-python36 job with a neutron-fullstack job and running it with python3
16:17 <slaweq> what do you think about that?
16:17 <mlavalle> yes, that makes sense
16:17 <mlavalle> the exception now is python 2.7
16:18 <mlavalle> we should get used to it
16:18 <njohnston> ok, I will change things so that we have a neutron-fullstack job that calls python3 in tox.ini but runs on the bionic nodeset.  Is that an accurate capture of the consensus?
16:19 <bcafarel> since we will drop the py2 one soon, that makes sense indeed
16:19 <slaweq> njohnston: for me yes, if bionic is the release which OpenStack Stein will support, of course :)
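[The split the discussion converges on — a generic interpreter in tox.ini, with the distro (and therefore the exact 3.x) pinned by the job — might look roughly like this. The env name and command are illustrative, not the actual neutron change under review:]

```ini
# tox.ini (sketch): no dot-version, so the env resolves to whatever
# python3 the node provides (3.5 on xenial, 3.6 on bionic)
[testenv:dsvm-fullstack]
basepython = python3
commands = stestr run {posargs}
```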
16:20 * slaweq doesn't know exactly which Ubuntu versions should be supported in Stein
16:20 <njohnston> slaweq: IIUC it is, I can get clarification on that from... I don't know who, I guess the infra guys.
16:20 <slaweq> ok, so let's do that and I hope it will work fine
16:21 <slaweq> ok, thx njohnston
16:21 <mlavalle> njohnston: the infra guys are a good source
16:21 <njohnston> If infra says that Stein needs to work on both bionic and xenial, then we may need two jobs in a few cases
16:21 <njohnston> #action njohnston Reduce to a single fullstack job named 'neutron-fullstack' that calls python3 in tox.ini and runs on the bionic nodeset.
16:21 <slaweq> thx njohnston for writing the action, I was just about to do it :)
16:22 <njohnston> #action njohnston Ask the infra team which Ubuntu release Stein will be supported on
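[njohnston's first action item would translate into a Zuul job definition along these lines. This is a sketch: the parent job and variable names are assumptions, and only the nodeset choice is the point:]

```yaml
# .zuul.yaml (sketch): one fullstack job, pinned to bionic for python 3.6
- job:
    name: neutron-fullstack
    parent: legacy-dsvm-base        # assumed parent
    nodeset: ubuntu-bionic          # bionic ships python 3.6
    vars:
      tox_envlist: dsvm-fullstack   # the generic-python3 tox env
```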
16:22 <slaweq> ok, let's move on to the next one
16:22 <slaweq> njohnston will send a patch to remove the fullstack py27 job completely
16:23 <njohnston> That is in, but it depends on the change to fix the fullstack 3.6 job
16:24 <slaweq> ok, in fact it will be done together if you change the Python version to py3 in this job :)
16:24 <njohnston> So it has been sitting there.  Given the previous discussion, I'll probably abandon it, since we'll be changing the 2.7 job to 3.x instead
16:24 <slaweq> ok, let's move to the next topic then
16:24 <slaweq> #topic Grafana
16:24 *** openstack changes topic to "Grafana (Meeting topic: neutron_ci)"
16:24 <slaweq> Quick info
16:24 <slaweq> I have 2 patches:
16:25 <slaweq> https://review.openstack.org/#/c/607285/ - to update fullstack-python35 to py36, but it will probably be changed/abandoned as we will change those jobs
16:25 <slaweq> and second: https://review.openstack.org/#/c/583870/ - to add the tempest-slow job to Grafana, which is now missing from the dashboard
16:26 <slaweq> also, speaking of Grafana, I noticed that we are probably missing some other jobs too, e.g. openstack-tox-py36
16:26 <njohnston> slaweq: I also had a change to update fullstack-python35 to 36, I'll abandon it as well: https://review.openstack.org/#/c/605128/
16:26 <slaweq> are there any volunteers to check that and add the missing jobs to Grafana?
16:27 <slaweq> njohnston: thx :)
16:27 <slaweq> #action njohnston to check and add missing jobs to the grafana dashboard
16:28 <slaweq> ahh, I forgot the link
16:28 <njohnston> FYI, the answer from fungi on the previous question: "the infra team isn't really in charge of making it happen, but https://governance.openstack.org/tc/reference/project-testing-interface.html#linux-distributions is interpreted to mean that projects should be switching their master branches to use ubuntu-bionic instead of ubuntu-xenial for jobs soon so they work out any remaining kinks before stein releases"
16:28 <slaweq> njohnston: thx for checking that, so we should be good to switch fullstack to py36 and bionic
16:28 <fungi> yep, one thing we probably want to consider is whether this should be treated like a de facto cycle goal
16:29 * slaweq wonders how many new issues we will have on the new Ubuntu release :)
16:29 <fungi> (making sure projects are testing against bionic instead of xenial on master before some point in this cycle)
16:29 <mlavalle> fungi: so are we switching in this cycle?
16:29 <mlavalle> or just preparing for the switch?
16:30 <fungi> mlavalle: our governance documents state we test on the latest ubuntu lts release
16:30 <fungi> so if we're not, we either need to fix that or update governance
16:31 <fungi> previously we've considered it reasonable to stick with whatever the latest ubuntu lts version was at the start of the cycle, which for rocky was ubuntu xenial
16:32 <fungi> ubuntu bionic was released partway into the rocky cycle, so we stuck with xenial through rocky but should switch to bionic in stein
16:32 <slaweq> so we should switch all our jobs to bionic in this cycle, right?
16:32 <mlavalle> thanks for the clarification
16:32 <fungi> ideally yes
16:32 <slaweq> ok, thx fungi
16:32 <bcafarel> that would also be in line with the py3.6 community goal tests (they need a distro that has it)
16:33 <fungi> during the ubuntu trusty to xenial transition, trove ended up with a lot of their jobs still on trusty after other projects released using xenial, and that made integration testing really challenging for them. ultimately they needed to update their stable branch testing to xenial, which was nontrivial
16:34 <slaweq> so I can prepare an etherpad with the list of jobs we should switch, and we can track progress there. what do you think?
16:35 <mlavalle> that's a good idea
16:35 <mlavalle> we can review it weekly in this meeting
16:35 <slaweq> yes :)
16:36 <njohnston> fungi: I was going to write an email to the ML with the neutron stadium as the intended audience, but as a member of the TC perhaps this is something that should come from that level, at least to remind everyone to start making plans for an orderly transition
16:36 <slaweq> #action slaweq to prepare etherpad with list of jobs to move to Bionic
16:36 <njohnston> s/as a member of the TC/since you are a member of the TC/
16:38 <slaweq> ok, moving on?
16:38 <njohnston> yes, go ahead
16:39 <slaweq> so, getting back to grafana: we have some issues which I want to talk about, but generally it's not very bad
16:39 <slaweq> what is worth mentioning is the fact that the tempest and scenario jobs have been doing quite well over the last week
16:40 <mlavalle> that's great!
16:40 <slaweq> so let's talk about some issues which we have there
16:40 <slaweq> #topic fullstack/functional
16:40 *** openstack changes topic to "fullstack/functional (Meeting topic: neutron_ci)"
16:40 * mlavalle flinches for the but....
16:41 <slaweq> There have been many errors in fullstack recently like the one reported in https://bugs.launchpad.net/neutron/+bug/1795482
16:41 <openstack> Launchpad bug 1795482 in neutron "Deleting network namespaces sometimes fails in check/gate queue with ENOENT" [High,In progress] - Assigned to Brian Haley (brian-haley)
16:42 <slaweq> and since the new pyroute2 version (0.5.3) was released, there is an issue like in https://bugs.launchpad.net/neutron/+bug/1795548
16:42 <openstack> Launchpad bug 1795548 in neutron "neutron.tests.functional.agent.l3.test_legacy_router.L3AgentTestCase.test_legacy_router_lifecycle* fail" [High,In progress] - Assigned to Brian Haley (brian-haley)
16:42 <slaweq> I mention both of them together as haleyb has already proposed one patch for both: https://review.openstack.org/#/c/607009/
16:42 <slaweq> so we should be good after this is merged
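[The ENOENT failures in bug 1795482 come down to a race: the namespace can already be gone by the time the cleanup runs. haleyb's patch addresses it inside neutron itself; the sketch below, with a hypothetical helper name, just illustrates the tolerant-delete pattern being applied:]

```python
import errno
import os


def remove_namespace_file(ns_path):
    """Delete a namespace file, treating 'already gone' as success.

    Returns True if this call removed the file, False if it had
    already disappeared (the race seen in the gate).
    """
    try:
        os.unlink(ns_path)
    except OSError as e:
        if e.errno != errno.ENOENT:
            raise  # a real error (e.g. EACCES) still propagates
        return False
    return True
```

[Called twice on the same path, the second call returns False instead of raising FileNotFoundError, which is exactly the behavior the gate cleanup needs.]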
16:43 <slaweq> anything else related to the fullstack/functional jobs?
16:44 * mlavalle will review today
16:44 <slaweq> thx mlavalle
16:45 <slaweq> if there is nothing else, let's go to the next one
16:45 <slaweq> #topic Tempest/Scenario
16:45 *** openstack changes topic to "Tempest/Scenario (Meeting topic: neutron_ci)"
16:45 <slaweq> Patch https://review.openstack.org/#/c/578796/ is waiting for review - it's the last job which is still defined using legacy Zuul templates
16:45 <slaweq> please review it :)
16:47 * mlavalle added it to his pile
16:47 <slaweq> thx mlavalle
16:47 <slaweq> and one more thing: recently I saw a few times that one of the trunk tests is still failing
16:48 <slaweq> so my fix for the long bash command helped, but there is still something else going on
16:48 <njohnston> slaweq: Since there are no more legacy jobs, does that mean that playbooks/legacy will go away?
16:49 <slaweq> njohnston: maybe I forgot to remove it :/
16:51 <njohnston> it'll be a satisfying follow-on change
16:51 <slaweq> njohnston: ok, thx
16:51 <bcafarel> it looks good checking out the review with git - legacy/ went away there
16:52 <bcafarel> oh, it's in neutron
16:52 <slaweq> speaking of the trunk test issue, I will open a new bug for it, but I don't know if I will have time to investigate it
16:52 <mlavalle> I can take it
16:52 <slaweq> #action slaweq will report a bug about the failing trunk scenario test
16:52 <slaweq> ok, thx mlavalle
16:52 <mlavalle> please ping me once you file the bug
16:53 <slaweq> #action mlavalle will check the failing trunk scenario test
16:53 <slaweq> mlavalle: sure
16:53 <slaweq> thx a lot for the help
16:53 <slaweq> ok, moving on to the next one
16:53 <slaweq> #topic grenade
16:53 *** openstack changes topic to "grenade (Meeting topic: neutron_ci)"
16:53 <slaweq> we still have the issue with the grenade-multinode-dvr job
16:54 <slaweq> what I observed yesterday was that the instance created by the grenade test only started responding to pings around 240-250 seconds after it was created
16:54 <njohnston> wow, that is pretty long
16:54 <mlavalle> 4 minutes
16:55 <slaweq> today I'm trying to add some additional logging to this job to check the iptables/OpenFlow/ARP tables on both nodes and in all namespaces, but I haven't got it working properly yet
16:55 <slaweq> yes, it's around 4 minutes, and I don't know why it's like that
16:56 <slaweq> I will continue working on this one
16:57 <slaweq> ok, so that's all from me
16:57 <mlavalle> I have one comment
16:58 <mlavalle> boden is trying to add a vmware-nsx job to the check queue. But for non-stadium projects we said 3rd-party CI, right?
16:58 <slaweq> I think it would be good to add it as a 3rd-party job
16:58 <slaweq> munimeha1: hi
16:58 <munimeha1> for tap-as-a-service inclusion into Neutron
16:58 <munimeha1> we require CI for grenade coverage
16:58 <munimeha1> what needs to be done?
16:59 <njohnston> We did, and I think the assumption was that we wouldn't have the hardware to properly check 3rd party (i.e. no NSX switches).
17:00 <slaweq> munimeha1: I think we are out of time now, can you ask about it in the neutron channel?
17:00 *** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"
17:00 <openstack> Meeting ended Tue Oct  2 17:00:26 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)
17:00 <openstack> Minutes:        http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-10-02-16.00.html
17:00 <openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-10-02-16.00.txt
17:00 <njohnston> All we could reasonably do in Zuul is cross-check the more basic things like unit tests, which would probably not be too horrible
17:00 <slaweq> thx for attending
17:00 <openstack> Log:            http://eavesdrop.openstack.org/meetings/neutron_ci/2018/neutron_ci.2018-10-02-16.00.log.html
18:53 *** AJaeger has joined #openstack-meeting
19:02 <fungi> anybody want to have an infra meeting?
19:02 <fungi> i guess i can chair, but if i'm the only one around there may be little point to it
19:02 <AJaeger> I'm here but it feels kind of lonely, fine with skipping
19:05 <AJaeger> fungi, it's just us, let's cancel...
19:06 <fungi> no infra team meeting this week, but if you happen to be in austin, tx right now go check out the zuul booth at ansiblefest!
19:07 <fungi> i am not in austin, but i've seen photos. it looks awesome
20:49 *** oneswig has joined #openstack-meeting
20:55 *** trandles has joined #openstack-meeting
20:59 *** janders has joined #openstack-meeting
21:00 <oneswig> #startmeeting scientific-sig
21:00 <openstack> Meeting started Tue Oct  2 21:00:26 2018 UTC and is due to finish in 60 minutes.  The chair is oneswig. Information about MeetBot at http://wiki.debian.org/MeetBot.
21:00 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
21:00 *** openstack changes topic to " (Meeting topic: scientific-sig)"
21:00 <openstack> The meeting name has been set to 'scientific_sig'
21:00 <oneswig> I made it.  Greetings all.
21:00 <janders> g'day oneswig :)
21:00 <oneswig> Hey janders!
21:01 <janders> do you have any experience with NVMe-oF in the OpenStack context?
21:01 <trandles> hello all
21:01 <janders> (as a cinder/nova/swift/... backend)
21:01 <oneswig> ooh, straight down to it :-)
21:01 <oneswig> hey trandles!
21:01 <janders> yes! :)
21:01 <oneswig> No direct experience of NVMe-oF
21:02 <oneswig> I keep looking for a good opportunity.
21:02 <oneswig> janders: what's come up?
21:02 <oneswig> Is this in bare metal or virt?
21:03 <janders> I was chatting to Moshe and the Intel guys about it in Vancouver, and they said that 1) the cinder driver is ready and will be supported in Pike, but 2) they haven't really gone beyond the scale of a single NVMe device in their testing (e.g. no NVMe-oF arrays etc.)
21:03 <janders> now I was chatting to my storage boss colleague about speccing out a backend for the cyber system, and he pulled out some array-type options that do NVMe-oF
21:04 <oneswig> I recently saw a PR from Excelero - a German cloud (teuto.net?) has built a substantial data platform upon it.
21:04 <janders> found it very interesting - I didn't know anyone offered NVMe-oF appliances just yet
21:04 <janders> he quoted E8
21:04 <oneswig> Ah yes, I've heard of them too.
*** bobh has quit IRC21:04
oneswigNot sure on the unique selling points to distinguish them (I am sure there are some)21:05
*** Qiming has quit IRC21:05
jandersyou asked what is this for - our cyber system (which will do both baremetal and VM)21:05
oneswignew project?21:06
jandersthe no. 1 use case would be persistent volumes, but the more OpenStack services can use the backend the better21:06
janderslong story. Short version is what I was talking about in Vancouver has been split up in few parts and this is in the context of one of them21:06
oneswigI've previously used iSER in a similar context.  It worked really well for large (>64K) transfers, was pretty lame below that.  This was serving Cinder volumes via RDMA to VMs.21:07
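(For reference, the iSER-over-RDMA Cinder setup oneswig describes usually comes down to a cinder.conf fragment like the one below — the backend name, volume group, and address are invented for illustration:)

```ini
[DEFAULT]
enabled_backends = lvm-iser

[lvm-iser]
volume_driver = cinder.volume.drivers.lvm.LVMVolumeDriver
volume_group = cinder-volumes      # assumed LVM volume group
target_helper = lioadm             # LIO target, which supports iSER
target_protocol = iser             # iSCSI Extensions for RDMA
target_ip_address = 10.0.0.10      # assumed RoCE-capable interface
```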
*** jmlowe has joined #openstack-meeting21:07
janderswas that IB or RoCE?21:07
jmloweHi everybody21:07
oneswigRoCE in this case.21:07
oneswigHey Mike - good to see you21:07
jandersok! that is encouraging - and important given that quite a few NVME-oF players do go RoCE and not IB21:07
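(As a point of reference, host-side attachment of an NVMe-oF namespace over RDMA — RoCE or IB — looks roughly like this with nvme-cli; the target address and NQN are placeholders:)

```console
# discover subsystems exposed by the (hypothetical) target at 10.0.0.20
nvme discover -t rdma -a 10.0.0.20 -s 4420

# connect; the namespace then appears as a local /dev/nvmeXnY device
nvme connect -t rdma -n nqn.2018-10.org.example:vol1 -a 10.0.0.20 -s 4420
nvme list
```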
*** martial has joined #openstack-meeting21:08
jandersand on a related but slightly different front - if you don't mind me asking21:08
oneswigAre you thinking of buying it in or building yourself?21:08
jandersfor your ironic ethernet+IB system, what ethernet NICs do you use?21:08
jandersare those Mellanox or non-Mellanox?21:08
oneswigMellanox ConnectX4 VPI works well - dual 100G, 1 Ethernet, 1 IB21:09
oneswigAlthough in this specific case the Ethernet was down-clocked to 25G21:09
jandersdid you need to do anything to make sure that the Ethernet port doesn't steal bandwidth away from IB?21:09
jandersreading my mind :)21:09
oneswigNo, nothing major21:09
janderswas the 25G cap due to PCIe3x16 bandwidth limitations?21:10
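(The one-port-Ethernet, one-port-IB VPI split oneswig mentions is selected per port with mlxconfig — a sketch; the MST device path is an assumption, and the values are 1 = InfiniBand, 2 = Ethernet:)

```console
mlxconfig -d /dev/mst/mt4115_pciconf0 query | grep LINK_TYPE
# port 1 InfiniBand, port 2 Ethernet; takes effect after a reboot
mlxconfig -d /dev/mst/mt4115_pciconf0 set LINK_TYPE_P1=1 LINK_TYPE_P2=2
```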
oneswigIf you want to use Secure Host for the multi-tenant IB, I believe you're stuck with ConnectX3-pro - unless you know otherwise?21:10
*** ttsiouts has quit IRC21:10
jandersall my POC nodes are either 2xFDR or 4xFDR so haven't had the PCIe3x16 problem just yet21:10
oneswigjanders: no, switch budget, and I think it was considered more representative of the use case requirements21:11
*** ttsiouts has joined #openstack-meeting21:11
jandersyou are right regarding Secure Host - I hope to work this out with Erez and his team21:11
jandersdo you know if mlnx have any good reason NOT to implement SecureHost for CX4 and above?21:11
jandersor is it that no one kept asking persistently enough? :)21:12
oneswigI don't know.21:12
jandersI will have a go at this soon :) hopefully I will get something out and it will be useful to you guys too21:12
oneswigI think your theory's good.21:12
*** cloudrancher has quit IRC21:12
jandersregarding the appliance vs roll your own for NVME-oF - we'll consider both approaches21:12
jandersthe idea of hyperconverged storage in a bare-metal cloud context is quite interesting actually, but I think this is a potentially endless topic21:13
jandersfirst we discouraged it as not feasible but it came back and it might actually work21:13
oneswigOur thinking has been more towards disaggregated storage recently.21:14
oneswigPossibly a single block device hyperconverged - to keep the resource footprint low on the compute nodes.21:15
oneswigMight be interesting if also nailed down with cgroups21:15
janderswe're not there yet - but in a fully containerised context where the whole deployment is baremetal openstack running k8s, it could be just as simple as it used to be in the hypervisor centric reality21:16
*** Qiming has joined #openstack-meeting21:16
*** Emine has joined #openstack-meeting21:16
oneswigWe did a little testing on resource footprint with hyperconverged (non-containerised) on bare metal.  I think the containers could work.21:16
oneswigOn the extreme bandwidth topic, one of our team has been getting really great numbers from BeeGFS at scale recently.  Servers with NVME and dual OPA exceeding 21GBytes/s for reads.  Yeee-ha21:18
oneswigAnd there are lots of them in this setup too...21:18
jandersour BeeGFS is being built (after some shipping delays). It's 24x NVMe and 2xEDR per box so hoping for some good numbers, too21:19
oneswigThere was talk of offering this resource in a NVMEoF mode, but it's secondary to scratch BeeGFS filesystems21:19
jandersagain our thinking is quite closely aligned :)21:19
oneswigSounds pretty similar, in which case it should fly21:19
jandersour guys are thinking 1) build up the BeeGFS on half of the nodes and see how hard the users will push it21:20
jandersif it's not being pushed too hard, we can use some of the NVMes for NVMe-oF, BeeGFS-OND etc21:20
jmloweIs BeeGFS taking root in Europe?  I'm not aware of any deployments in the US21:20
*** awaugama has joined #openstack-meeting21:21
oneswigjmlowe: It seems to be gaining ground over here.  Some people got shaken with recent events in Lustre I guess21:21
jandersgood BeeGFS-OpenStack integration would be an amazing thing to have21:21
oneswigThe performance we've seen from it is as good or better than Lustre, on the same hardware.21:22
oneswigjanders: Manila! Lets do this!21:22
jandersI don't really have much experience with non-IB BeeGFS deployments, but I do know it's quite flexible so it could be quite useful outside HPC too21:22
janders+1! :)21:22
*** diablo_rojo has joined #openstack-meeting21:22
jandersthe tricky bit is no kerberos support (or prospects for it) last time I checked21:22
jmloweI'm really happy with nfs-ganesha/cephfs/manila so far, going to try to start rolling it out21:23
jandersthis can be worked around, but it does make life much harder21:23
jmloweso all you'd really need to get bootstrapped is a BeeGFS FSAL for ganesha21:23
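(For context, the nfs-ganesha/cephfs stack jmlowe mentions is driven by export blocks like this in ganesha.conf — the path and IDs are illustrative, and a BeeGFS FSAL, if one existed, would slot into the same structure:)

```conf
EXPORT {
    Export_Id = 1;
    Path = "/volumes/share-01";   # assumed CephFS path, e.g. a Manila share
    Pseudo = "/share-01";
    Access_Type = RW;
    FSAL {
        Name = CEPH;              # the CephFS FSAL; a BeeGFS one would go here
    }
}
```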
oneswigjmlowe: I'd be interested to hear how that flies when it's under heavy load, really interested.21:24
jmlowebenchmarking from a single node it's faster than native cephfs for smaller files21:24
oneswigThis week we've been experimenting with a kubernetes CSI driver for BeeGFS but it's really early days for that.21:24
oneswigjmlowe: an extra layer of caching at play there?  Any theories?21:25
jmlowethat was my go to theory21:25
oneswigThat's pretty interesting.  Does Ganesha run in the hypervisor or separate provisioned nodes?21:27
*** ttsiouts has quit IRC21:27
*** ttsiouts has joined #openstack-meeting21:28
oneswigso what's new with Jetstream jmlowe?21:28
oneswigI've had a frustrating day debugging bizarre stubbornness with getting Heat to do more things21:31
*** bobh has joined #openstack-meeting21:31
janderswhat specifically?21:31
jandersheat is always "fun" (even though tight integration with all the other APIs is pretty cool)21:31
trandles+1 frustrating day...but for non-fun non-technical reasons21:31
oneswigCreating resource groups of nodes with parameterised numbers of network interfaces.21:32
oneswigtrandles: feel your pain.21:32
oneswigDid you want to mention those open positions?21:32
jandersif I remember correctly, that was non-trivial even in ansible..21:32
oneswig(somewhat select audience here, mostly of non-US nationals...)21:33
trandlesI need to get it up on the openstack jobs board21:33
jmloweWe are getting some new IU only hardware it looks like, 20 v100's and 16 of dell's latest blades21:33
oneswigjanders: they appear to share an issue with handling parameters that are yaml-structured compound data objects21:33
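(A sketch of the pattern oneswig is fighting with: a json-typed parameter carrying the per-NIC list, fed through a ResourceGroup into each server's networks property — every name below is invented:)

```yaml
heat_template_version: 2017-02-24
parameters:
  node_count:
    type: number
    default: 2
  node_networks:            # compound (list-of-maps) parameter
    type: json
    default:
      - network: cluster-net
      - network: storage-net
resources:
  nodes:
    type: OS::Heat::ResourceGroup
    properties:
      count: { get_param: node_count }
      resource_def:
        type: OS::Nova::Server
        properties:
          name: node-%index%
          image: CentOS-7   # assumed image and flavor
          flavor: m1.large
          networks: { get_param: node_networks }
```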
*** felipemonteiro has quit IRC21:33
oneswigtrandles: the jobs board seems to work21:34
jmloweI need to figure out a good way to partition so that NSF users only get the hardware the NSF purchased and IU users get only the hardware IU purchased21:34
trandleswe've done zero advertising and are pulling in resumes21:34
trandlesso fingers crossed we get the right candidate21:34
oneswigjmlowe: there are ways and means for that.21:35
jmloweI was thinking via private flavors and host aggregates21:35
trandlesjmlowe: I hate that hard partitioning problem.  We almost always try to argue "you'll get the equivalent amount of time for the dollars you provided" but then end up in arguments about normalization21:35
oneswigI forget the current best practice, but linking project flavors and custom resources seems to be in fashion21:35
jmloweIt's a generation newer cpu, so they really are different, skylake vs haswell I think21:36
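(The private-flavor/host-aggregate scheme jmlowe proposes is typically wired up like this, assuming the AggregateInstanceExtraSpecsFilter is enabled in nova — all names here are made up:)

```console
# put the IU-purchased skylake hosts in their own aggregate
openstack aggregate create --property owner=iu iu-skylake
openstack aggregate add host iu-skylake compute-iu-01

# a private flavor pinned to that aggregate, shared only with IU projects
openstack flavor create --private --vcpus 8 --ram 65536 --disk 40 \
  --property aggregate_instance_extra_specs:owner=iu iu.large
openstack flavor set --project iu-project iu.large
```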
jandersjmlowe: do you need to worry about idle resources in this scenario, or is the demand greater than the supply across the board?21:37
oneswigjmlowe: what will you do for network fabric between those V100s?21:38
jmlowenot terribly concerned about idle resources21:38
jandersbriefly jumping back to the SDN discussion - do you guys know if it's possible to have a single port virtual function on a dual port CX4/5/6?21:39
oneswigHad a question on this - does anyone have experience on mixing SR-IOV with bonded NICs?  Is there a way to make that work?21:39
jmlowelooking at nvidia grid and slicing n dicing them, let's say to run a workshop on the openmp like automagical gp-gpu compiler thingy whose name escapes me right now21:39
oneswigjanders: snap :-)21:40
*** Leo_m has quit IRC21:40
jmloweI would think you would need to do the bonding inside the vm in order for SR-IOV to work with bonding21:40
jandersoneswig: I think Mellanox do have some clever trick to do in-hardware LACP in CX5 / Ethernet. But that's me repeating their words, I haven't tested that21:40
trandlesSR-IOV and bonded NICs sounds like evil voodoo21:41
oneswigtrandles: it does indeed.21:41
jmloweif you consider SR-IOV is just multiplexed pci passthrough then it doesn't sound so scary21:41
trandlesbut across multiple physical devices?21:42
oneswigtrandles: there's two scenarios - 2 nics or 1 dual-port nic - perhaps the second might work, the first, I am not confident21:43
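(jmlowe's suggestion — bond inside the VM — is plain Linux bonding over the two VFs; a sketch with assumed interface names, using active-backup since the physical switch sees no LACP from the guest:)

```console
# inside the guest: enslave both SR-IOV VFs to an active-backup bond
ip link add bond0 type bond mode active-backup miimon 100
ip link set ens4 down; ip link set ens4 master bond0
ip link set ens5 down; ip link set ens5 master bond0
ip link set bond0 up
```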
jandersslide 1021:43
jandersI think this is what Erez was talking about21:43
oneswigNo way - I was there :-)21:44
trandlesdual-port NICs sounds like something manageable...multiple NICs sounds like driver hell21:44
jandersI wasn't but got a good debrief in Vancouver21:44
jandershmm now I wonder if I should ask for 1xHDR and 1xdual port EDR  for my new blades :)21:45
jmlowetrandles: ever have fun in the bad old days altering a NIC's MAC address to defeat MAC address filtering?21:45
janders..or even 2xEDR and 2xHDR all active/passive for maximum availability21:46
oneswigjanders: might push out the delivery date, beyond an event horizon21:46
jandersoneswig: this is a very valid point21:46
*** bobh has quit IRC21:47
jandersI do have some kit that can be used in the interim, but perhaps retrofitting HDR later isn't a bad idea21:47
*** bobh_ has joined #openstack-meeting21:47
jmloweDid live migration with Mellanox SR-IOV ever come to fruition?21:47
jandersnot that I know of21:47
oneswigDK Panda presented something on this21:48
jandershowever cold migration is now supposed to work (historically wasn't the case)21:48
oneswigBut it's never clear if his work will be generally available (or when)21:48
jmlowedamn, that would have been killer, DK Panda hinted at it during the 2017 HPC Advisory Council21:48
jmlowewell, that's something at least21:48
oneswigHe came back with numbers, just no github URL (as far as I remember)21:48
oneswigjmlowe: if you look for his video from Lugano 2018, he covered some results.  It's possible he was presenting on using RDMA to migrate the VM, I don't recall exactly.21:50
*** janders_ has joined #openstack-meeting21:50
oneswigtrandles: we were talking Charliecloud over here the other day.  Any news from the project?21:51
jmloweThat's an entirely different beast, not to say it's not useful for something like postcopy live migration, but it's not what I was promised with my flying car21:51
oneswigjmlowe: transporting your physical person, I wouldn't trust RDMA for that :-)21:52
janders_c'mon... isn't IB lossless ;)?21:52
*** janders has quit IRC21:52
trandlesCharliecloud: Reid has been kicking ass on implementing some shared object voodoo to get MPI support automagically inside a containerized application on both OpenMPI installations and Cray MPICH21:52
jmlowesee star trek quote21:53
*** Emine has quit IRC21:53
oneswigtrandles: this is for the kind of problems that Shifter tackles by grafting in system MPI libraries?21:54
trandleswe have also had quite a bit of success putting some large multi-physics lagrangian applications into containers and running automated workflows of parameter studies21:54
trandlesReid had a few chats with the Shifter guys (Shane and Doug) about the gotchas they found and then implemented his own method of doing something similar21:55
oneswigtrandles: do you know of people successfully using MPI + Kubernetes?  It keeps coming up here and I'm suspicious it's missing some key PMI glue21:55
trandlesMPI + Kubernetes: A Story of Oil and Water21:56
oneswigtrandles: so nearly vinaigrette21:56
trandlesnot nearly as delicious21:56
jmlowejust needs some egg21:56
trandlesI know more than I ever thought I'd know (or wanted to know) about how MPI starts up jobs21:56
jmlowemmm, sausage21:57
jmloweit's dinner time here21:57
janders_I don't think there's an easy way of plumbing RDMA into k8s managed docker containers at this point in time - please correct me if I am wrong21:57
trandleslong story short, ORTE is dead (not a joke...), PRTE is specialized and will be unsupported, PMIx is the future (maybe...)21:57
*** ijw has quit IRC21:57
trandleswhat that all means is in the near future (once OpenMPI 2.x dies off) there won't be an mpirun to start jobs21:58
trandleswhat we do is compile slurm with PMIx support and srun wires it all up21:59
trandlesk8s would need that PMIx stuff to properly start a job IMO22:00
oneswigIt's a missing piece for AI at scale, I suspect it'll happen one way or another and the deep learning crowd will drive it.22:00
oneswigtrandles: any evidence of k8s getting extended with that support?22:00
trandlesI don't follow k8s enough right now to know22:00
oneswighmmm.  OK the quest continues but that's more doors closed.22:01
trandlesfor MPI jobs, k8s runtime model feels like an anti-pattern22:01
trandlesie. when a rank dies, MPI dies22:01
oneswigAll that careful just-so placement...22:01
oneswigAlas we are out of time.22:01
oneswigfinal comments?22:01
trandlesfor things like DASK, multi-node TensorFlow, other ML/AI frameworks, k8s makes more sense22:02
janders_thanks guys, great chat!22:02
trandlesjanders_: +122:02
trandleslater everyone22:02
jmloweI'll miss most of you in Dallas22:02
oneswigjmlowe: you're not going?22:02
trandlesjmlowe: you're in Dallas or not in Dallas?22:02
jmloweI'm going, was under the impression most were going to the Berlin Summit instead22:02
oneswigIch bin ein Berliner22:03
trandlesah, ok, I'll be in Dallas Monday night, all day Tuesday...leaving early Wednesday morning22:03
jmloweand I need to buy plane tickets RSN22:03
*** mriedem has quit IRC22:03
oneswigOK y'all, until next time22:03
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"22:04
openstackMeeting ended Tue Oct  2 22:03:59 2018 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)22:04
openstackMinutes:        http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-10-02-21.00.html22:04
openstackMinutes (text): http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-10-02-21.00.txt22:04
openstackLog:            http://eavesdrop.openstack.org/meetings/scientific_sig/2018/scientific_sig.2018-10-02-21.00.log.html22:04
*** ttsiouts has quit IRC22:04
*** oneswig has quit IRC22:05
*** trandles has quit IRC22:06
*** jesusaur has quit IRC22:06
*** bobh_ has quit IRC22:07
*** jesusaur has joined #openstack-meeting22:11
martialWas a cool meeting ;)22:12
*** michael-beaver has quit IRC22:33
*** rbudden has quit IRC22:54
*** tpsilva has quit IRC22:55
*** hongbin has quit IRC23:02
*** jamesmcarthur has joined #openstack-meeting23:05
*** _alastor_ has quit IRC23:08
*** jamesmcarthur has quit IRC23:10
*** awaugama has quit IRC23:11
*** rbudden has joined #openstack-meeting23:17
*** njohnston has quit IRC23:17
*** yamamoto has quit IRC23:21
*** yamamoto has joined #openstack-meeting23:22
*** yamamoto has quit IRC23:24
*** yamamoto has joined #openstack-meeting23:24
*** yamamoto has quit IRC23:29
*** yamamoto has joined #openstack-meeting23:30
*** yamamoto has quit IRC23:34
*** rbudden has quit IRC23:36
*** erlon has joined #openstack-meeting23:36
*** macza_ has quit IRC23:37
*** macza has joined #openstack-meeting23:37
*** rbudden has joined #openstack-meeting23:41
*** janders_ has quit IRC23:41
*** macza has quit IRC23:42
*** Swami has quit IRC23:42
*** rbudden has quit IRC23:57
*** rcernin has joined #openstack-meeting23:59

Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!