Thursday, 2020-08-27

*** artom has quit IRC  00:21
*** _erlon_ has quit IRC  00:25
*** baojg has quit IRC  00:37
*** baojg has joined #openstack-meeting-3  00:39
*** bnemec has quit IRC  00:41
*** bnemec has joined #openstack-meeting-3  00:46
*** bnemec has quit IRC  01:34
*** bnemec has joined #openstack-meeting-3  01:39
*** apetrich has quit IRC  02:26
*** apetrich has joined #openstack-meeting-3  02:27
*** psachin has joined #openstack-meeting-3  03:39
*** psahoo has joined #openstack-meeting-3  03:57
*** psahoo has quit IRC  04:42
*** belmoreira has joined #openstack-meeting-3  04:53
*** psahoo has joined #openstack-meeting-3  04:55
*** lajoskatona has joined #openstack-meeting-3  05:13
*** lajoskatona has left #openstack-meeting-3  05:13
*** psahoo has quit IRC  05:18
*** belmoreira has quit IRC  05:26
*** belmoreira has joined #openstack-meeting-3  05:34
*** psahoo has joined #openstack-meeting-3  05:35
*** ralonsoh has joined #openstack-meeting-3  06:14
*** slaweq has joined #openstack-meeting-3  06:31
*** psahoo has quit IRC  06:43
*** psahoo has joined #openstack-meeting-3  06:49
*** e0ne has joined #openstack-meeting-3  06:57
*** e0ne has quit IRC  07:00
*** yamamoto has quit IRC  07:12
*** yamamoto has joined #openstack-meeting-3  07:13
*** baojg has quit IRC  07:28
*** baojg has joined #openstack-meeting-3  07:29
*** baojg has quit IRC  07:30
*** baojg has joined #openstack-meeting-3  07:31
*** e0ne has joined #openstack-meeting-3  07:39
*** tosky has joined #openstack-meeting-3  07:47
*** baojg has quit IRC  08:06
*** baojg has joined #openstack-meeting-3  08:07
*** yamamoto has quit IRC  08:15
*** yamamoto has joined #openstack-meeting-3  08:25
*** yamamoto has quit IRC  09:24
*** yamamoto has joined #openstack-meeting-3  09:53
*** yamamoto has quit IRC  10:05
*** yamamoto has joined #openstack-meeting-3  10:36
*** yamamoto has quit IRC  10:40
*** yamamoto has joined #openstack-meeting-3  10:40
*** yamamoto has quit IRC  11:01
*** yamamoto has joined #openstack-meeting-3  11:12
*** yamamoto has quit IRC  11:12
*** baojg has quit IRC  11:19
*** baojg has joined #openstack-meeting-3  11:19
*** yamamoto has joined #openstack-meeting-3  11:35
*** artom has joined #openstack-meeting-3  12:25
*** psachin has quit IRC  13:14
*** yamamoto has quit IRC  13:23
*** bnemec has quit IRC  14:11
*** bnemec has joined #openstack-meeting-3  14:12
*** baojg has quit IRC  14:42
*** baojg has joined #openstack-meeting-3  14:43
*** apetrich has quit IRC  14:54
*** baojg has quit IRC  14:54
*** baojg has joined #openstack-meeting-3  14:55
*** baojg has joined #openstack-meeting-3  14:56
*** yamamoto has joined #openstack-meeting-3  15:24
*** yamamoto has quit IRC  15:29
*** rambo_li has joined #openstack-meeting-3  15:44
*** harsha24 has joined #openstack-meeting-3  15:56
<gibi> #startmeeting nova  16:00
<openstack> Meeting started Thu Aug 27 16:00:09 2020 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.  16:00
<openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.  16:00
*** openstack changes topic to " (Meeting topic: nova)"  16:00
<openstack> The meeting name has been set to 'nova'  16:00
<dansmith> o/  16:00
*** elod has joined #openstack-meeting-3  16:00
<gibi> o/  16:00
<gmann> o/  16:00
<gibi> let's get started  16:00
<gibi> #topic Bugs (stuck/critical)  16:00
*** openstack changes topic to "Bugs (stuck/critical) (Meeting topic: nova)"  16:00
<gibi> We have a high-severity CVE #link https://bugs.launchpad.net/nova/+bug/1890501 that has been fixed on master and ussuri, and patches are going in to the older stable branches.  16:00
<openstack> Launchpad bug 1890501 in OpenStack Compute (nova) stein "Soft reboot after live-migration reverts instance to original source domain XML (CVE-2020-17376)" [Critical,In progress] - Assigned to Lee Yarwood (lyarwood)  16:00
<bauzas> \o  16:01
<stephenfin> o/  16:01
<gibi> besides that I don't see any critical bugs  16:01
<gibi> #link 31 new untriaged bugs (-5 since the last meeting): https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New  16:01
<gibi> #link 6 untagged untriaged bugs (-2 since the last meeting): https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW  16:01
<gibi> please look at the untriaged bug list and try to push some bugs forward  16:01
<gibi> also, I pinged some of you about specific bugs  16:02
*** psahoo has quit IRC  16:02
<bauzas> thanks gibi for scrubbing the list  16:02
<lyarwood> \o  16:02
<gibi> do we need to talk about some of the open bugs here?  16:02
<lyarwood> https://review.opendev.org/#/c/747358/ looks jammed btw  16:02
<lyarwood> not sure what we need to do to move it into the gate  16:03
<lyarwood> sorry, ignore that  16:03
<lyarwood> thought it was the CVE, too many bugs :)  16:03
<gibi> :)  16:03
<gibi> #topic Runways  16:03
*** openstack changes topic to "Runways (Meeting topic: nova)"  16:03
<gibi> etherpad #link https://etherpad.opendev.org/p/nova-runways-victoria  16:03
<gibi> bp/provider-config-file has been merged \o/ so its slot is freed up in the runway  16:04
<gibi> the first patch of bp/cyborg-rebuild-and-evacuate now has a +2 from me  16:04
<gibi> the spawn part of bp/add-emulated-virtual-tpm has been merged, the migrate and resize part has a +2 from me  16:04
<gibi> nothing is in the queue and we have a free runway slot, so if you have a feature ready then it is a good time to add it to the queue  16:04
<gibi> do we need to talk about the items in the runway slots?  16:04
<bauzas> gibi: I'll look at the vtpm series again tomorrow morning  16:05
<gibi> cool  16:05
<gibi> thanks  16:05
<bauzas> this should potentially free yet another slot  16:05
<bauzas> *potentially* :)  16:05
<gibi> #topic Release Planning  16:06
*** openstack changes topic to "Release Planning (Meeting topic: nova)"  16:06
<gibi> We have 2 weeks until Milestone 3, which is Feature Freeze  16:06
<gibi> I've set up the release tracking etherpad #link https://etherpad.opendev.org/p/nova-victoria-rc-potential  16:06
<gibi> Next week is the last release of non-client libraries for Victoria. For os-vif we would like to get #link https://review.opendev.org/#/c/744816/ merged before the release  16:06
<gibi> Sooner rather than later we have to talk about cycle highlights and the reno prelude  16:06
*** dustinc has quit IRC  16:07
*** baojg has quit IRC  16:07
<gibi> anything else about the coming release that we have to discuss?  16:07
<harsha24> https://blueprints.launchpad.net/nova/+spec/parallel-filter-scheduler  16:07
*** baojg has joined #openstack-meeting-3  16:07
*** sean-k-mooney has joined #openstack-meeting-3  16:08
<gibi> harsha24: let's get back to that in the Open Discussion  16:08
<harsha24> okay  16:08
<gibi> #topic Stable Branches  16:08
*** openstack changes topic to "Stable Branches (Meeting topic: nova)"  16:08
<gibi> lyarwood: do you have any news?  16:09
<gibi> I guess we release from all the stable branches that are not in EM, due to the CVE I mentioned above  16:10
<lyarwood> elod: has been busy cutting releases  16:10
<gibi> cool  16:10
<lyarwood> elod: no other news aside from that really  16:10
<elod> yes, the ussuri release is done,  16:10
<lyarwood> oops sorry  16:10
<elod> the train patch is open  16:11
<elod> stein will come as soon as the patch lands :)  16:11
<elod> np :)  16:11
<bauzas> I guess we can release ussuri but we need to hold for train, no?  16:11
* bauzas is lost with all the train queue  16:11
<elod> the train patch is: https://review.opendev.org/#/c/748383/  16:12
<lyarwood> https://review.opendev.org/#/c/747358/ would be nice to land, as we did in ussuri  16:12
<lyarwood> that's the change that is jammed at the moment  16:12
<elod> bauzas: there are some patches in the queue, but do we want to wait?  16:12
<bauzas> I don't know  16:12
<bauzas> ask the owner :p  16:12
<elod> ok, I'll -W the Train release patch then  16:12
<lyarwood> elod: ack, I'll reset the +W flags on that change, hopefully that should get it back into the gate  16:13
<gibi> OK  16:13
<elod> lyarwood: not jammed actually  16:13
<elod> just the parent needs to get merged  16:13
<elod> :]  16:13
<lyarwood> oh, my bad, I was sure that had already landed  16:14
<lyarwood> gerrit-- needs to make these dots bigger!  16:14
<gibi> moving on  16:14
<gibi> Libvirt (bauzas)  16:14
<gibi> there is one thing on the agenda  16:14
<gibi> (lyarwood) Looking at bumping MIN_{QEMU,LIBVIRT}_VERSION in V  16:14
<bauzas> nothing to report honestly, I know aarents had changes but I didn't pay attention  16:14
<gibi> #link https://review.opendev.org/#/q/topic:bump-libvirt-qemu-victoria+(status:open+OR+status:merged)  16:14
<gibi> This would mean using UCA on bionic ahead of our move to focal, is anyone against that?  16:14
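(For context on what is being bumped: nova's libvirt driver gates startup on minimum hypervisor versions through constants such as MIN_LIBVIRT_VERSION and MIN_QEMU_VERSION. Below is a minimal, illustrative sketch of such a version gate; the version tuples and helper names are assumptions made for the example, not the values proposed in the linked reviews.)

    # Illustrative only: the tuples below are assumed example values, not the
    # actual versions proposed for Victoria in the linked reviews.
    MIN_LIBVIRT_VERSION = (5, 0, 0)
    MIN_QEMU_VERSION = (4, 0, 0)

    def version_to_string(version):
        """Render a (major, minor, micro) tuple as 'major.minor.micro'."""
        return '.'.join(str(part) for part in version)

    def check_hypervisor_versions(libvirt_version, qemu_version):
        """Refuse to start the compute service on too-old libvirt/QEMU."""
        if libvirt_version < MIN_LIBVIRT_VERSION:
            raise RuntimeError(
                'libvirt %s is too old, at least %s is required' % (
                    version_to_string(libvirt_version),
                    version_to_string(MIN_LIBVIRT_VERSION)))
        if qemu_version < MIN_QEMU_VERSION:
            raise RuntimeError(
                'QEMU %s is too old, at least %s is required' % (
                    version_to_string(qemu_version),
                    version_to_string(MIN_QEMU_VERSION)))

    # Example: a UCA-on-bionic host reporting libvirt 5.4.0 and QEMU 4.0.0.
    check_hypervisor_versions((5, 4, 0), (4, 0, 0))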
<lyarwood> I forgot to update the agenda  16:15
<gibi> looking at these patches, most of them are in merge conflict  16:15
<lyarwood> this looks like it's blocked on the same focal detach device bug  16:15
<gibi> lyarwood: ohh, so this was discussed last week  16:15
<lyarwood> yeah, sorry  16:15
<gibi> np  16:15
<sean-k-mooney> gibi: we used to use UCA on 16.04  16:15
<lyarwood> #link https://bugs.launchpad.net/nova/+bug/1882521 is the bug reported against focal  16:16
<openstack> Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,New]  16:16
<lyarwood> I'm seeing the same issue with UCA on bionic but I can't reproduce it locally  16:16
<lyarwood> if we are supposed to be moving to focal anyway, this needs to get resolved  16:17
<gibi> yeah  16:17
<sean-k-mooney> i think we were meant to move at m2  16:17
<sean-k-mooney> or try to, so we should swap sooner rather than later  16:17
<lyarwood> yeah, I know gmann was still trying to move  16:17
<gibi> I'm hoping somebody will crack that, as I did not have time to look into it  16:17
<lyarwood> same, and I'm out until Tuesday after this, which doesn't help  16:18
<gmann> lyarwood: yeah, it is not clear why those fail  16:18
<gmann> as of now those failing tests are skipped to keep the testing going.  16:18
<sean-k-mooney> lyarwood: could this be due to a persistent vs transient domain difference?  16:18
<gmann> but yes, this is one of the blockers to move to Focal  16:19
<lyarwood> sean-k-mooney: maybe, I really can't tell tbh  16:19
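(To illustrate the persistent-vs-transient distinction sean-k-mooney is asking about, here is a minimal sketch using the libvirt-python bindings directly; the domain name and disk XML are hypothetical, and this is not nova's actual detach code path or a diagnosis of the Focal bug.)

    import libvirt

    # Hypothetical guest and device, not taken from the bug report.
    DOMAIN_NAME = 'instance-00000001'
    DISK_XML = """<disk type='file' device='disk'>
      <source file='/var/lib/nova/instances/test/disk.1'/>
      <target dev='vdb' bus='virtio'/>
    </disk>"""

    conn = libvirt.open('qemu:///system')
    dom = conn.lookupByName(DOMAIN_NAME)

    # A running guest needs the detach applied to its live config; a persistent
    # domain also needs its persistent definition updated, otherwise the device
    # comes back the next time the persistent definition is used (e.g. reboot).
    flags = libvirt.VIR_DOMAIN_AFFECT_LIVE
    if dom.isPersistent():
        flags |= libvirt.VIR_DOMAIN_AFFECT_CONFIG

    # Live detach is asynchronous: the call returns once the request is sent,
    # and the guest confirms the removal later via a device-removed event.
    dom.detachDeviceFlags(DISK_XML, flags)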
<gibi> #action we need a volunteer to look into the Focal bug #link https://bugs.launchpad.net/nova/+bug/1882521  16:19
<openstack> Launchpad bug 1882521 in OpenStack Compute (nova) "Failing device detachments on Focal" [High,New]  16:19
<sean-k-mooney> for what it's worth, focal and centos8 both use the same version of libvirt  16:20
<sean-k-mooney> so i would expect this to affect centos8 too  16:20
<sean-k-mooney> and by extension rhel  16:20
*** e0ne has quit IRC  16:20
<gibi> let's move on now, but please continue discussing this bug on #openstack-nova  16:21
<gibi> #topic Stuck Reviews  16:21
*** openstack changes topic to "Stuck Reviews (Meeting topic: nova)"  16:21
<gibi> nothing on the agenda. Is there anything that is stuck?  16:21
*** bnemec has quit IRC  16:22
<tosky> I'd say https://review.opendev.org/#/c/711604/  16:22
<gmann> i think you missed the API updates  16:22
<tosky> (or I can mention it when talking about community goals)  16:22
<gibi> gmann: ack, sorry, I will get back to that  16:23
<gibi> tosky: as I see, you need reviews  16:23
<gibi> on that patch  16:23
<gibi> tosky: I can look at it tomorrow  16:23
<bauzas> we should rename this section  16:23
<bauzas> this is confusing, we're not asking for code waiting to be reviewed  16:24
<gibi> bauzas: sure  16:24
<bauzas> but for patches that are stuck because of conflicting opinions  16:24
<gibi> bauzas: I will rename it to "reviews with conflicting opinions"  16:24
<bauzas> I suck at naming things  16:24
<dansmith> when was the last time we had one?  16:24
<bauzas> so I surely won't propose a thing  16:25
<dansmith> maybe we could just remove it from the agenda altogether?  16:25
<gibi> a real one? in the meeting? a pretty long time ago  16:25
<bauzas> this could work too  16:25
<bauzas> open discussion is there anyway, so people can argue there  16:25
<gibi> #action gibi to remove stuck reviews from the agenda  16:25
<gibi> moving on  16:25
<gibi> #topic PTG and Forum planning  16:25
*** openstack changes topic to "PTG and Forum planning (Meeting topic: nova)"  16:25
<gibi> The next Forum and PTG are less than 2 months from now  16:26
<gibi> summary mail #link http://lists.openstack.org/pipermail/openstack-discuss/2020-August/016770.html  16:26
<gibi> please indicate your acceptable PTG timeslots in #link https://doodle.com/poll/a5pgqh7bypq8piew  16:26
<gibi> please collect topics in #link https://etherpad.opendev.org/p/nova-wallaby-ptg  16:26
<gibi> anything to be discussed about the coming PTG and Forum?  16:26
<gibi> #topic Open discussion  16:27
*** openstack changes topic to "Open discussion (Meeting topic: nova)"  16:27
<gibi> harsha24: what can we do for you?  16:27
<bauzas> he's proposing a new design for processing filters that uses threads  16:28
<bauzas> we had this discussion in the past and we always said filters aren't a performance bottleneck  16:28
<harsha24> i am thinking, is there a way to reduce the latency to spawn a vm using a multithreading concept  16:28
<bauzas> but,  16:28
<bauzas> things changed slightly since we have placement  16:28
<bauzas> harsha24: the point is, I wouldn't argue for more complexity unless you prove to me there are huge benefits in doing such parallelism  16:29
<sean-k-mooney> bauzas: well, with placement you have less need for filters  16:29
<bauzas> sean-k-mooney: correct  16:29
<gibi> harsha24: do you have measurements of the latency specific to the filter executions?  16:29
<dansmith> imagine how much worse NoValidHost will be to debug when it depends on the ordering of threads :)  16:30
<bauzas> tbc, all filter processing is memory-based  16:30
<bauzas> dansmith: right, that too  16:30
<bauzas> filter ordering is crucial for most of our ops  16:30
*** bnemec has joined #openstack-meeting-3  16:30
<dansmith> yep  16:30
<sean-k-mooney> well, all the filters for a given request could probably be put in their own thread without that being an issue  16:30
<artom> dansmith, well, if it's done properly we'd join() all the threads *then* print out a summary  16:30
<dansmith> sean-k-mooney: we can do that today with the number of workers on the scheduler, right?  16:31
<bauzas> I remember johnthetubaguy making a proposal about some smarter filter processing in the past  16:31
<artom> I think the question is - is the complexity and effort worth it?  16:31
<sean-k-mooney> individual filters on the other hand would be a problem  16:31
<sean-k-mooney> dansmith: ya, more or less  16:31
<bauzas> but that's... yeah, artom said it loudly  16:31
<sean-k-mooney> harsha24: what was your suggestion?  16:31
<harsha24> no, actually each filter would produce its own list of hosts and we'd make an intersection of the filtered hosts at the end  16:31
<dansmith> so each filter runs on the full set of hosts, which means expensive filters cost even more  16:32
<bauzas> sean-k-mooney: a classic parallelism approach  16:32
<harsha24> yeah  16:32
<bauzas> dansmith: yup  16:32
<sean-k-mooney> so the current filter list has a defined order  16:32
<bauzas> honestly, again, the CPU time is very short  16:32
<artom> Thinking out loud, perhaps just doing the external REST API calls async with something like concurrent.futures might be a much bigger speed gain  16:32
<bauzas> and we use generators  16:32
<sean-k-mooney> if we run them in parallel they need to filter more hosts and we then need to calculate the intersection  16:32
<bauzas> honestly, again  16:32
<sean-k-mooney> so that will use more cpu and memory  16:32
<artom> IOW - do other stuff while we wait for Cinder/Neutron to answer. I know it's a completely different thing, just continuing the effort/results thought...  16:33
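(A minimal sketch of artom's side-thought: overlapping independent external REST calls with concurrent.futures instead of serializing them. The service helpers below are placeholders standing in for Neutron/Cinder calls, not nova's compute-manager code.)

    import time
    from concurrent.futures import ThreadPoolExecutor

    # Placeholder stand-ins for REST calls to external services; in real nova
    # these would go through the neutron/cinder client modules.
    def get_network_info(instance_uuid):
        time.sleep(0.5)                      # simulate a network round trip
        return {'ports': [], 'instance': instance_uuid}

    def get_volume_connections(instance_uuid):
        time.sleep(0.5)                      # simulate a network round trip
        return {'volumes': [], 'instance': instance_uuid}

    def gather_external_info(instance_uuid):
        # Overlap the two independent external calls instead of waiting on
        # each sequentially; block on the results only once both are needed.
        with ThreadPoolExecutor(max_workers=2) as pool:
            net_future = pool.submit(get_network_info, instance_uuid)
            vol_future = pool.submit(get_volume_connections, instance_uuid)
            return net_future.result(), vol_future.result()

    start = time.monotonic()
    gather_external_info('11111111-2222-3333-4444-555555555555')
    print('took %.2fs instead of ~1s' % (time.monotonic() - start))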
<bauzas> operators never saw the filters' performance as a bottleneck  16:33
<dansmith> sean-k-mooney: that was my point, yeah  16:33
<bauzas> ask CERN if you don't know  16:33
<johnthetubaguy> do we know which filters are expensive? last time I looked it's the DB queries that dominated  16:33
<dansmith> artom: what filters call out to cinder and neutron?  16:33
<bauzas> johnthetubaguy: my whole point  16:33
<dansmith> exactly  16:33
<artom> dansmith, not the filters, in the compute manager for example  16:33
<dansmith> artom: this is about filters  16:33
<artom> dansmith, as I said, completely different thing :)  16:33
<bauzas> we stopped querying the DB in the filters ages ago  16:33
<dansmith> ack  16:33
<aarents> sean-k-mooney: bauzas: we need fewer filters with placement AND filters are run only on the few hosts with room for allocation (others are filtered out before by placement)  16:33
<sean-k-mooney> the aggregate* filters and numa are probably the most expensive  16:33
<dansmith> numa I'm sure  16:34
<artom> dansmith, I know - it was more the "where can we invest to improve speed" idea  16:34
<sean-k-mooney> aarents: yes  16:34
<harsha24> what about weighers?  16:34
<bauzas> AGAIN, filters don't call the DB or neutron  16:34
<johnthetubaguy> numa and affinity were fairly expensive last time I looked  16:34
<sean-k-mooney> weighers are a separate phase, they could be parallelised but i think they are cheap  16:34
<dansmith> harsha24: even worse.. weigh all hosts even though we're going to exclude most?  16:34
<bauzas> they just manipulate in-memory python objects to answer a simple question  16:35
<bauzas> sean-k-mooney: again, the ordering is crucial  16:35
<sean-k-mooney> dansmith: well, we could weigh the filtered hosts in parallel  16:35
<bauzas> for weighers  16:35
<sean-k-mooney> bauzas: not for weighers  16:35
<sean-k-mooney> that is not an ordered list  16:35
<sean-k-mooney> or is it? i should check that  16:35
<harsha24> I mean after the hosts are filtered, then we can use parallelism for the weighers  16:35
<sean-k-mooney> if it's ordered then yes  16:35
<bauzas> sean-k-mooney: trust me :)  16:36
<dansmith> sean-k-mooney: that seems like a tiny potential for improvement  16:36
<harsha24> filter order is not required as we intersect the sets at the end  16:36
<sean-k-mooney> dansmith: correct  16:36
<bauzas> this seems to be adding more complexity for no gain  16:36
<johnthetubaguy> bauzas: +1  16:36
<sean-k-mooney> harsha24: filter order is currently used to improve performance  16:36
<dansmith> sean-k-mooney: there's still overhead in spinning up those threads, joining, and then the sorting is still linear, so I'd super doubt it's worth it :)  16:36
<johnthetubaguy> sean-k-mooney: +1  16:36
<sean-k-mooney> you can change the order of the filters to eliminate more hosts in the initial filters  16:36
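(A toy sketch of the two approaches being compared: today's ordered, sequential chain where each filter only sees the hosts that survived the previous ones, versus the proposed run-every-filter-over-all-hosts-and-intersect design. The hosts and filters below are stand-ins, not nova's FilterScheduler code.)

    def sequential_filtering(hosts, filters, spec):
        # Current approach: each filter only sees hosts that survived the
        # previous filters, so placing cheap/selective filters first keeps
        # expensive filters (e.g. NUMA) from examining most hosts at all.
        for host_filter in filters:
            hosts = [h for h in hosts if host_filter(h, spec)]
            if not hosts:
                break
        return hosts

    def intersection_filtering(hosts, filters, spec):
        # Proposed approach: run every filter over the full host list
        # (possibly in parallel) and intersect the per-filter results at the
        # end.  Every filter now pays for the full list, and ordering no
        # longer reduces the work.
        per_filter = [
            {h['name'] for h in hosts if host_filter(h, spec)}
            for host_filter in filters
        ]
        survivors = set.intersection(*per_filter) if per_filter else set()
        return [h for h in hosts if h['name'] in survivors]

    # Toy hosts and filters, purely illustrative.
    hosts = [{'name': 'cn%d' % i, 'free_ram_mb': 1024 * i,
              'disabled': i % 2 == 0} for i in range(1, 6)]
    filters = [
        lambda host, spec: not host['disabled'],                   # cheap, selective
        lambda host, spec: host['free_ram_mb'] >= spec['ram_mb'],  # pretend this one is expensive
    ]
    spec = {'ram_mb': 2048}
    assert sequential_filtering(hosts, filters, spec) == \
           intersection_filtering(hosts, filters, spec)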
<bauzas> I thought ops maybe wanted to describe filters and weights another way, but they were okay with a sequential approach  16:37
<gibi> OK, so in summary, we don't see filters as a performance bottleneck of instance spawn. Therefore adding the complexity of parallel execution is not worth it for us.  16:37
<harsha24> Again, the number of filters should be less than the number of threads available  16:37
<sean-k-mooney> dansmith: yes, i'm not arguing for threads by the way  16:37
<sean-k-mooney> harsha24: can you provide any data for this?  16:37
<dansmith> maybe this request is actually just someone's master's thesis and not really reality? :P  16:37
<sean-k-mooney> harsha24: so we can look at a specific case  16:37
<gibi> sean-k-mooney: ++  16:38
<gibi> I would like to see data that proves what we could gain  16:38
<gibi> before moving forward with this  16:38
<dansmith> the claim is also latency,  16:38
<dansmith> not throughput  16:38
<sean-k-mooney> harsha24: we have had inefficiently implemented filters in the past which have been optimised either at the db layer or in python  16:38
<bauzas> we're discussing a finite number of possibilities  16:38
<artom> dansmith, oh be nice :) If harsha24 is looking to improve Nova performance, we should guide them to where we think the biggest problems are  16:38
<bauzas> in general, we have less than 1000 nodes  16:38
<bauzas> so parallelism doesn't really help  16:39
<bauzas> even 100000 with say 10 filters is reasonable  16:39
<sean-k-mooney> well, even if we have more, placement will have narrowed it  16:39
<bauzas> right  16:39
<bauzas> and again, the whole thing was sucking because of the DB access times  16:39
<bauzas> again, latency, as dansmith said  16:39
<dansmith> also, you can improve latency by reducing the number of results from placement, if you're claiming latency on an empty cloud where tons of hosts are considered by the filters  16:40
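(One existing knob for what dansmith describes: the scheduler asks placement for allocation candidates, and nova's [scheduler]/max_placement_results option caps how many candidates come back for the filters and weighers to process. A minimal nova.conf sketch follows; the value is only an example, not a recommendation.)

    [scheduler]
    # Cap how many allocation candidates placement returns, so the filters and
    # weighers only ever see a bounded set of hosts, even on a large empty cloud.
    max_placement_results = 100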
<bauzas> at least that's what a couple of summit sessions from CERN taught me  16:40
<sean-k-mooney> cern reduced it excessively  16:40
<bauzas> aarents: what are your findings btw?  16:40
<sean-k-mooney> but yes  16:40
<dansmith> which is far less complex than threading any of the rest of the process  16:40
<bauzas> ++  16:41
<johnthetubaguy> more request filters to remove filters?  16:41
<dansmith> johnthetubaguy: that's covered under the placement topic I think, but always better, yeah  16:41
<sean-k-mooney> where possible  16:41
<johnthetubaguy> dansmith: true  16:41
<bauzas> johnthetubaguy: well, I wouldn't be that opinionated  16:42
<dansmith> meaning, as we do more of that, fewer real filters are required because placement gives us more accurate results  16:42
<artom> harsha24, are you following any of this, by the way?  16:42
<bauzas> johnthetubaguy: I'd say some ops would prefer having a fine-grained placement query and others would prefer looping over the filters logic  16:42
<artom> harsha24, was your idea to implement threading, or more generally to try and improve performance?  16:42
<johnthetubaguy> dansmith: +1  16:42
<harsha24> what if we use this as a plugin when there are few filters, like less than 3 filters or so?  16:42
<aarents> bauzas: we leaves with a cluster of 1500 nodes on newton (placement free).. it is slow but it "usable"  16:42
<aarents> live  16:42
<bauzas> aarents: ah, I forgot you lag :p  16:43
<harsha24> artom: yeah  16:43
<sean-k-mooney> harsha24: we had plugins in the past that tried to use the caching scheduler and other tricks to speed up scheduling  16:43
<sean-k-mooney> in the long run that approach has not worked out  16:44
<sean-k-mooney> you could do an experiment but i'm not sure the scheduling is the bottleneck in spawning an instance  16:44
<johnthetubaguy> well, the caching scheduler became pointless, now placement does it better  16:44
<harsha24> oh okay, then it's not feasible  16:44
<gibi> OK, let's wrap this up.  16:44
<sean-k-mooney> johnthetubaguy: ya, more or less  16:45
<harsha24> does this method improve performance in weighers?  16:45
<bauzas> the caching scheduler was there because of the DB bottleneck, not because of the filters :)  16:45
<gibi> so the direct idea of parallelizing the filter execution seems not worth it. please provide data to prove the gain.  16:45
*** tosky has quit IRC  16:45
<bauzas> +1  16:45
<harsha24> I will try a PoC based on this  16:45
<gibi> harsha24: if you are here to improve performance in general, then I think artom had some ideas on where to gain more  16:45
<bauzas> with a devstack env ideally, please  16:45
<johnthetubaguy> bauzas: agreed, and the filters, with loads of hosts too basically zero time, when I measured it in prod at a public cloud  16:45
<johnthetubaguy> s/too/took/  16:46
<gibi> Is there any other open discussion topic for today?  16:46
<artom> gibi, it was literally a random thought in my head, but I guess?  16:46
<gibi> artom: :)  16:46
<artom> End this! ;)  16:46
<bauzas> artom: I'd dare say good luck with improving I/Os on filters  16:47
<dansmith> we do async the neutron stuff already, IIRC,  16:47
<bauzas> right  16:47
<dansmith> and the cinder stuff has dependencies that make it hard, IIRC  16:47
<bauzas> we don't call cinder in a filter either way  16:47
<bauzas> and I'd -2 any proposal like it  16:47
<dansmith> bauzas: artom was randomly picking non-filter things  16:47
<bauzas> hah  16:47
<artom> *performance related* random things!  16:47
<artom> I wasn't talking about, like, cabbages!  16:47
<bauzas> but I thought we were discussing filter parallelism  16:48
<dansmith> bauzas: so this is not filter-related, but it was confusing in a discussion about...filters :)  16:48
<artom> Yes, to improve *performance* :)  16:48
<bauzas> right  16:48
<dansmith> bauzas: we were...  16:48
<sean-k-mooney> let's go to #openstack-nova  16:48
<gibi> if nothing else for today, then let's close the meeting.  16:48
<dansmith> please  16:48
<artom> I said "End this!" like 13 lines ago  16:48
<sean-k-mooney> and we can continue this there if we want  16:48
<gibi> thanks for joining  16:48
<bauzas> okay, my tongue is dry  16:48
*** yamamoto has joined #openstack-meeting-3  16:48
<sean-k-mooney> o/  16:48
<gibi> #endmeeting  16:48
*** openstack changes topic to "OpenStack Meetings || https://wiki.openstack.org/wiki/Meetings/"  16:48
<openstack> Meeting ended Thu Aug 27 16:48:36 2020 UTC.  Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4)  16:48
<openstack> Minutes:        http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.html  16:48
<openstack> Minutes (text): http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.txt  16:48
<openstack> Log:            http://eavesdrop.openstack.org/meetings/nova/2020/nova.2020-08-27-16.00.log.html  16:48
*** sean-k-mooney has left #openstack-meeting-3  16:48
* bauzas runs as quickly as he can  16:49
*** belmoreira has quit IRC  16:52
*** yamamoto has quit IRC  16:53
*** harsha24 has quit IRC  16:55
*** ralonsoh has quit IRC  17:12
*** baojg has quit IRC  17:25
*** baojg has joined #openstack-meeting-3  17:26
*** baojg has quit IRC  17:45
*** baojg has joined #openstack-meeting-3  17:46
*** raildo_ has quit IRC  17:55
*** raildo_ has joined #openstack-meeting-3  17:56
*** e0ne has joined #openstack-meeting-3  18:09
*** rambo_li has quit IRC  18:21
*** e0ne has quit IRC  18:27
*** tosky has joined #openstack-meeting-3  18:35
*** belmoreira has joined #openstack-meeting-3  18:54
*** rambo_li has joined #openstack-meeting-3  19:02
*** rambo_li has quit IRC  19:12
*** rambo_li has joined #openstack-meeting-3  20:39
*** rambo_li has quit IRC  20:57
*** baojg has quit IRC  21:04
*** baojg has joined #openstack-meeting-3  21:05
*** baojg has quit IRC  21:05
*** baojg has joined #openstack-meeting-3  21:06
*** slaweq has quit IRC  21:22
*** baojg has quit IRC  21:24
*** baojg has joined #openstack-meeting-3  21:25
*** raildo_ has quit IRC  21:28
*** slaweq has joined #openstack-meeting-3  21:29
*** yamamoto has joined #openstack-meeting-3  21:30
*** slaweq has quit IRC  21:34
*** belmoreira has quit IRC  21:56
*** bnemec has quit IRC  22:07
*** bnemec has joined #openstack-meeting-3  22:12
*** yamamoto has quit IRC  22:12
*** baojg has quit IRC  23:06
*** baojg has joined #openstack-meeting-3  23:07
*** baojg has quit IRC  23:16
*** bnemec has quit IRC  23:37
*** bnemec has joined #openstack-meeting-3  23:43
*** tosky has quit IRC  23:59
