16:00:02 <gibi> #startmeeting nova
16:00:03 <opendevmeet> Meeting started Tue Jun  1 16:00:02 2021 UTC and is due to finish in 60 minutes.  The chair is gibi. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:00:05 <opendevmeet> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:00:07 <opendevmeet> The meeting name has been set to 'nova'
16:00:07 <gibi> o/
16:00:08 <bauzas> \o
16:00:11 <elodilles> o/
16:00:14 <gibi> hotseating with neutron :)
16:00:33 <bauzas> yup, I guess the chair is hot
16:00:34 <gibi> or hotswaping
16:00:41 <gmann> o/
16:00:43 <slaweq> :)
16:00:45 <slaweq> hi
16:00:47 <bauzas> maybe we should open the windows ?
16:00:49 <gibi> slaweq: o/
16:00:56 <gibi> bauzas: :D
16:01:04 <slaweq> bauzas: :D
16:01:17 <stephenfin> o/
16:01:31 * bauzas misses the physical meetings :cry:
16:01:37 * gibi joins in
16:01:38 <elodilles> :)
16:01:50 <sean-k-mooney> o/
16:01:52 <bauzas> you could see me sweating
16:01:56 <gibi> this will be a bittersweet meeting
16:02:23 <gibi> lets get rolling
16:02:24 <gibi> #topic Bugs (stuck/critical)
16:02:28 <gibi> no critical bugs
16:02:37 <gibi> #link 9 new untriaged bugs (-0 since the last meeting): #link https://bugs.launchpad.net/nova/+bugs?search=Search&field.status=New
16:02:45 <gibi> I like this stable number under 10
16:02:46 <gibi> :D
16:02:59 <gibi> any specific bug we need to discuss?
16:04:02 <gibi> good
16:04:03 <gibi> #topic Gate status
16:04:08 <gibi> Placement periodic job status #link https://zuul.openstack.org/builds?project=openstack%2Fplacement&pipeline=periodic-weekly
16:04:11 <gibi> super green
16:04:22 <gibi> also nova master gate seems to be OK
16:04:26 <gibi> we merged patches today
16:04:39 <bauzas> \o/
16:04:44 <bauzas> thanks lyarwood I guess
16:05:07 <gibi> thanks everybody who keep this up :)
16:05:17 <gibi> any gate issue we need to talk about?
16:06:06 <sean-k-mooney> I'm still investigating the one that might be related to os-vif
16:06:16 <sean-k-mooney> but nothing else from me
16:06:44 <gibi> sean-k-mooney: thanks
16:07:21 <gibi> if nothing else for the gate then
16:07:22 <gibi> #topic Release Planning
16:07:27 <gibi> We had Milestone 1 last Thursday
16:07:34 <gibi> M2 is 5 weeks from now
16:07:52 <gibi> at M2 we will hit spec freeze
16:08:08 <gibi> hurry up with specs :)
16:08:15 <gibi> anything else about the release?
16:09:16 <gibi> #topic Stable Branches
16:09:25 <gibi> copying elodilles' notes
16:09:36 <gibi> newer stable branch gates need investigation into why they fail
16:09:40 <gibi> wallaby..ussuri seem to be failing, mainly due to nova-grenade-multinode (?)
16:09:44 <gibi> train..queens seems to be OK
16:09:48 <gibi> pike gate fix is on the way, should be OK whenever it lands ( https://review.opendev.org/c/openstack/devstack/+/792268 )
16:09:51 <gibi> EOM
16:10:30 <gibi> elodilles: on the nova-grenade-multinode failure, is it the ceph issue you pushed a DNM patch for?
16:10:50 <elodilles> yes, that's it i think
16:11:04 <gibi> in short, we see a too-new ceph version (pacific) being installed on stable
16:12:18 <gibi> anything else about stable?
16:12:25 <elodilles> yes, melwitt's comment pointed that out ( https://review.opendev.org/c/openstack/nova/+/785059/2#message-c31738db1240ddaa629a3aaa4e901c5a62206e85 )
16:12:39 <elodilles> nothing else from me :X
16:13:14 <gibi> #topic Sub/related team Highlights
16:13:18 <gibi> Libvirt (bauzas)
16:13:30 <bauzas> well, nothing worth mentioning
16:13:46 <gibi> thanks
16:13:52 <gibi> #topic Open discussion
16:13:57 <gibi> I have couple of topics
16:14:02 <gibi> (gibi) Follow up on the IRC move
16:14:06 <gibi> so welcome on OFTC
16:14:11 <gmann> +1
16:14:15 <gibi> so far the move seems to be going well
16:14:33 <gibi> I grepped our docs and the nova related wiki pages and fixed them up
16:14:45 <artom> Then again, if someone's trying to reach us over on Freenode, we'll never know, will we?
16:14:49 <gmann> yeah thanks for that
16:14:55 <gibi> artom: I will stay on freenode
16:14:56 <artom> Unless someone's stayed behind to redirect folks?
16:14:59 * sean-k-mooney was not really aware we mentioned irc in the docs before this
16:15:00 <artom> Aha!
16:15:00 <bauzas> we still have fewer people in the OFTC chan vs. the freenode one, though
16:15:14 <gmann> yeah I will also stay on Freenode for redirect
16:15:18 <bauzas> gibi: me too, I'll keep the ZNC network open for a while
16:15:22 <gibi> so if any discussion is starting on freenode I will redirect people
16:15:36 <gmann> We are going to discuss in TC Thursday meeting on topic change on freenode or so
16:15:42 <artom> I believe you're not allowed to mention OFTC by name?
16:15:50 <artom> There are apparently bots that hijack channels if you do that?
16:15:51 <gibi> artom: I will use private messages if needed
16:15:53 <bauzas> we currently have 102 attendees on freenode -nova compared to OFTC one
16:15:53 <gmann> artom: we can do as we have OFTC ready and working now
16:16:01 <bauzas> freenode : 102, OFTC : 83
16:16:08 <sean-k-mooney> artom: that was libera and it was based on the topic I think
16:16:19 <bauzas> so I guess not everyone moved already
16:16:31 <gmann> we can give this email ref to know details http://lists.openstack.org/pipermail/openstack-discuss/2021-May/022780.html
16:16:39 <gibi> gmann: good point
16:16:39 <bauzas> artom: indeed, you can't spell the word
16:16:41 <sean-k-mooney> bauzas: well some of those are likely bots
16:16:50 <bauzas> artom: or the channels could be hijacked by some ops
16:17:04 <bauzas> sean-k-mooney: haven't really dug into the details
16:17:10 <bauzas> but the numbers were appealing
16:17:27 <bauzas> but that's only a 48h change
16:17:30 <sean-k-mooney> yep most people are now here
16:17:33 <bauzas> we'll see next weeks
16:17:42 <gibi> OK, any other feedback or question around the IRC move?
16:18:23 <gibi> if not then a related topic...
16:18:28 <gibi> (gibi) Do we want to move our meeting from #openstack-meeting-3 to #openstack-nova ?
16:18:34 <gibi> this was a question originally from the TC
16:18:52 <gmann> I think it makes sense and it is also working for many projects like QA, TC afaik
16:18:58 <artom> -1 from me. I've popped into other channels, not aware that they were having a meeting, and "polluted" their meeting
16:19:11 <gmann> artom: which one?
16:19:21 <sean-k-mooney> artom: i raised that in the tc call
16:19:24 <artom> gmann, #openstack-qa, actually, I think :)
16:19:27 <gmann> I have seen very little interruption in QA or TC over the past year
16:19:29 <sean-k-mooney> but that is really just a habit thing
16:19:33 <artom> Folks were very polite and everything
16:19:41 <gmann> artom: it might happen very rarely.
16:19:44 <sean-k-mooney> waiting for the topic to load and seeing if the channel is active
16:19:49 <gibi> I can handle interruption politely I guess
16:19:55 <artom> But I felt guilty for interrupting
16:19:57 <gmann> and if anyone come in between we can tell them its meeting time
16:20:01 <artom> But... why?
16:20:07 <artom> What's wrong with a dedicated meeting channel?
16:20:11 <sean-k-mooney> artom: i have had the same feeling yes
16:20:33 <sean-k-mooney> artom: nothing really, just more infrastructure for scheduling the meetings
16:20:33 <bauzas> I am absolutely +0 to this
16:20:44 <sean-k-mooney> e.g. "booking the room"
16:20:50 <gmann> it is hard to know where the meeting is going on with all openstack-meeting-* channels.
16:20:52 <sean-k-mooney> and setting up the logging etc.
16:20:59 <bauzas> but sometimes it's nice to have sideways discussions happening on -nova while we continue ranting here
16:21:06 <artom> sean-k-mooney, it's already all set up :)
16:21:11 <gmann> there will be no difference in logging etc
16:21:32 <sean-k-mooney> artom: oh i know
16:21:39 <gibi> bauzas: +1 on the side discussions
16:21:45 <sean-k-mooney> i do like having the side  conversation option
16:21:46 <bauzas> so I guess my only concern would be the ability to have dual conversations happening at the same time without polluting the meeting
16:21:47 <artom> gmann, is it though? Maybe for someone completely new to the community
16:21:53 <sean-k-mooney> that said i try not to do that when i can
16:22:09 <gmann> artom: it's for me too when I need to attend many meetings :)
16:22:12 <bauzas> but I guess #openstack-dev could do the job
16:22:31 <artom> gmann, ah, I see your point.
16:22:39 <bauzas> my other concern could be some random folks pinging us straight during the meeting
16:22:44 <artom> Well, #openstack-<project-name>-meetings then?
16:22:46 <sean-k-mooney> I guess it depends, I usually wait for gibi to ping us :)
16:22:50 <bauzas> but that's not a big deal
16:22:57 <bauzas> (as I usually diverge)
16:22:58 <artom> I still want to keep the dual channel meeting/normal IRC option
16:23:07 <gmann> artom: that will be too many channels
16:23:23 <artom> *shrug* w/e :)
16:23:33 <bauzas> #openstack-dev can fit the purpose of side discussions
16:23:37 <sean-k-mooney> dansmith: you are pretty quiet on the topic
16:23:37 * artom joins bauzas in the +0 camp
16:23:52 <dansmith> sean-k-mooney: still have a conflict in this slot, will have to read later
16:23:53 <gmann> dansmith: is in another call i think
16:24:06 <bauzas> I just express open thoughts and I'm okay with workarounds if needed
16:24:13 <bauzas> hence the +0
16:24:20 <bauzas> nothing critical to me to hold
16:24:21 <sean-k-mooney> dansmith: ah, just asking if you preferred #openstack-nova vs #openstack-meeting-3
16:24:22 <gibi> OK, lets table this for next week then. So far I don't see too many people wanting to move
16:24:24 <sean-k-mooney> dansmith: no worries
16:24:44 <gmann> I will say let's try and if it does not work we can come back here
16:24:59 <gmann> +1 on keeping it open for discussion
16:25:31 <gibi> we will come back to this next week
16:25:40 <gibi> next
16:25:42 <gibi> (gibi) Monthly extra meeting slot for the Asia + EU. Doodle #link https://doodle.com/poll/svrnmrtn6nnknzqp . It seems Wednesday 8:00 or Thursday 8:00 is winning.
16:25:51 <gibi> 8:00 UTC  I mean
16:25:51 <dansmith> sean-k-mooney: very much -nova
16:27:01 <sean-k-mooney> gibi: does that work for you to chair the meeting at that time
16:27:04 <bauzas> dansmith: I just expressed some concern about the ability to have side discussions, they could happen "elsewhere" tho
16:27:06 <gibi> If no objection then I will schedule that to Thursday 8:00 UTC and I will do that on #openstack-nova (so we can try the feeling)
16:27:12 <gibi> sean-k-mooney: yes
16:27:17 <gibi> sean-k-mooney: I can chair
16:27:23 <sean-k-mooney> cool
16:27:34 <bauzas> this works for me too
16:27:45 <bauzas> 10am isn't exactly early in the morning
16:28:08 * sean-k-mooney wakes up at 10:30 most mornings
16:28:09 <gibi> there was a lot of participation in the doodle
16:28:24 <sean-k-mooney> yes
16:28:29 <gibi> so I hope for similar crowd on the meeting too
16:28:32 <bauzas> indeed
16:28:36 <sean-k-mooney> from a number of names we don't see often in this meeting
16:28:55 <bauzas> yup, even from cyborg team
16:29:17 <gibi> I will schedule the next meeting for this Thursday
16:29:29 <gibi> so we can have a rule of every first Thursday of a month
16:29:30 <bauzas> (I mentioned this because they expressed their interest in an Asia-friendly timeslot)
16:30:10 <bauzas> gibi: this works for me, and this would mean the first meeting being held in two days
16:30:19 <gibi> yepp
16:30:44 <gibi> no more topic on the agenda for today. Is there anything else you would like to discuss today
16:30:47 <gibi> ?
16:31:33 <bauzas> do we need some kind of formal agenda for thursday meeting ?
16:31:48 <bauzas> or would we stick with a free open hour
16:32:03 <bauzas> ?
16:32:15 <gibi> I will ask the people about it on Thursday
16:32:23 <gibi> I can do both
16:32:42 <gibi> or just summarizing anything from Tuesday
16:32:43 <artom> Oh, can I ask a specless blueprint vs spec question?
16:32:49 <gibi> artom: sure, go ahead
16:33:07 <artom> So, we talked about https://review.opendev.org/c/openstack/nova-specs/+/791287 a few meetings ago
16:33:17 <artom> WIP: Rabbit exchange name: normalize case
16:33:38 <artom> sean-k-mooney came up with a better idea that solves the same problem, except without the messy upgrade impact:
16:33:50 <artom> Just refuse to start nova-compute if we detect the hostname has changed
16:34:11 <artom> So I want to abandon https://review.opendev.org/c/openstack/nova-specs/+/791287 and replace it with sean-k-mooney's approach
16:34:20 <artom> Which I believe doesn't need a spec, maybe not even a blueprint
16:34:45 <sean-k-mooney> you mean treat it as a bugfix if it's not a blueprint
16:35:12 <gibi> artom: the hostname reported by libvirt changed compared to the hostname stored in the DB?
16:35:18 <artom> gibi, yes
16:35:22 <artom> sean-k-mooney, basically
16:36:11 <bauzas> tbc, we don't mention the service name
16:36:13 <gibi> artom: could there be deployments out there that are working today but will stop working after your change?
16:36:21 <bauzas> but the hypervisor hostname which is reported by libvirt
16:36:30 <bauzas> because you have a tuple
16:36:35 <artom> gibi, we could wrap it in a config option, sorta like stephenfin did for NUMA live migration
16:36:39 <bauzas> (host, hypervisor_hostname)
16:36:47 <sean-k-mooney> gibi: tl;dr in the virt driver we would look up the compute service record using CONF.host and, in the libvirt driver, check that 1) the compute nodes associated with the compute service record have length 1 and 2) that its hypervisor_hostname is the same as the one we currently have
16:37:05 <gibi> sean-k-mooney: thanks
16:37:12 <bauzas> eg. with ironic, you have a single nova-compute service (then, a single hostname) but multiple nodes, each of them being an ironic node UUID
16:37:40 <bauzas> sean-k-mooney: what puzzles me is that I thought service RPC names were absolutely unrelated to hypervisor names
16:37:46 <artom> bauzas, it would be for drivers that have a 1:1 host:node relationship
16:37:58 <artom> bauzas, but it's a good point, we'd have to make it driver-agnostic as much as possible
16:37:59 <bauzas> and CONF.host is the RPC name, hence the service name
16:38:11 <bauzas> artom: that's my concern, I guess
16:38:37 <bauzas> some ops wanna define some RPC name that's not exactly what the driver reports and we said for a while "you'll be fine"
16:38:40 <artom> bauzas, valid concern, though I think it'd be pointless to talk about it in a spec, without the code to look at
16:39:23 <bauzas> artom: tbh, I'm not even sure that drivers other than libvirt use the driver-reported name as the service name
16:39:48 <bauzas> so we need to be... cautious, I'd say
16:39:51 <gibi> I thought we use CONF.host as service name
16:39:54 <sean-k-mooney> bauzas: we can't do that without breaking people
16:39:58 <bauzas> gibi: right
16:39:58 <sean-k-mooney> gibi: we do
16:40:03 <gibi> OK
16:40:17 <gibi> so the service name is hypervisor agnostic
16:40:19 <bauzas> gibi: but we use what the virt driver reports for the hypervisor_hostname field of the ComputeNode record
16:40:32 <gibi> the node name is hypervisor specific
16:40:33 <bauzas> gibi: this is correct again
16:40:34 <sean-k-mooney> and in theory we use conf.host for the name we put in instance.host
16:41:00 <gibi> and RPC name is also the service name so that is also hypervisor agnostic
16:41:06 <bauzas> yup
16:41:14 <gibi> so if we need to fix the RPC name then we could do it hypervisor-agnostically
16:41:17 <gibi> (I guess)
16:41:31 <bauzas> but here, artom proposes to rely on the discrepancy to make it hardstoppable
16:41:46 <sean-k-mooney> I think there are two issues: 1) changing conf.host and 2) the hypervisor_hostname changing
16:41:55 <gibi> what if we detect the discrepancy via comparing db host with nova-compute conf.host?
16:42:06 <sean-k-mooney> ideally we would like both to not change and to detect/block both
16:42:28 <bauzas> well, if you change the service name, then it will create a new service record
16:42:30 <gibi> hm, we cannot look up our old DB record if conf.host is changed :/
16:42:35 <sean-k-mooney> gibi: we would have to do that backwards
16:42:44 <sean-k-mooney> look up the compute node RP by hypervisor_hostname
16:42:51 <bauzas> the old service will be seen as dead
16:42:51 <sean-k-mooney> then check the compute service record
16:43:14 <gibi> sean-k-mooney: so we detect if conf.host changes but hypervisor_hostname remains the same
16:43:36 <sean-k-mooney> gibi: yep we can check if either of the values changes
16:43:41 <sean-k-mooney> but not if both of the values change
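(A minimal Python sketch of the startup check described above; the helper names service_get_by_host, get_node_by_hypervisor_hostname and nodes_for_service are hypothetical placeholders, not the actual Nova objects API. The idea is to cross-check CONF.host and the driver-reported hypervisor_hostname in both directions:)

    def check_host_identity(conf_host, local_hv_hostname, db):
        # Refuse to start nova-compute if the host identity looks renamed.
        service = db.service_get_by_host(conf_host)
        if service is None:
            # No service record for CONF.host: either a genuinely new host,
            # or CONF.host was renamed. Do the reverse lookup by the
            # driver-reported hypervisor_hostname.
            node = db.get_node_by_hypervisor_hostname(local_hv_hostname)
            if node is not None and node.host != conf_host:
                raise RuntimeError("CONF.host changed from %s to %s; "
                                   "refusing to start" % (node.host, conf_host))
            return  # genuinely new host
        nodes = db.nodes_for_service(service)
        # Only meaningful for drivers with a 1:1 host:node relationship (libvirt).
        if len(nodes) == 1 and nodes[0].hypervisor_hostname != local_hv_hostname:
            raise RuntimeError("hypervisor_hostname changed from %s to %s; "
                               "refusing to start"
                               % (nodes[0].hypervisor_hostname, local_hv_hostname))
        # If both CONF.host and hypervisor_hostname changed at once, neither
        # lookup matches and the rename is not detectable this way; the old
        # records are simply orphaned, as noted below.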
16:43:50 <bauzas> what problem are we trying to solve ?
16:44:01 <bauzas> the fact that messages go lost ?
16:44:02 <gibi> sean-k-mooney: OK thanks, now I see it
16:44:03 <sean-k-mooney> unless we just write this in a file on disk that we read
16:44:38 <artom> bauzas, people renaming their compute hosts and exploding stuff
16:44:55 <artom> Either on purpose, or accidentally
16:45:01 <sean-k-mooney> bauzas: the fact that it is possible to start the compute service when either conf.host or hypervisor_hostname has changed, and get into an inconsistent state
16:45:24 <sean-k-mooney> bauzas: one of those being that we can have instances on the same host with different values of instance.host
16:45:32 <gibi> I guess if both changed then we get a new working compute and the old will be orphaned
16:45:46 <sean-k-mooney> gibi: yep
16:45:49 <gibi> OK
16:45:59 <stephenfin> assuming we can solve this, I don't see this as any different to gibi's change to disallow N and N-M (M > 1) in the same deployment
16:46:03 <bauzas> sean-k-mooney: can we just consider NOT accepting a rename of the service hostname if instances exist on it?
16:46:18 <stephenfin> in terms of being a hard break but only for people that are already in a broken state
16:46:26 <sean-k-mooney> bauzas: that might also be an option yes
16:46:30 <stephenfin> ditto for the NUMA live migration thing, as artom alluded to above
16:46:59 <sean-k-mooney> bauzas: i think we have options and it might be good to POC some of them
16:47:13 <bauzas> again, I'm a bit conservative here
16:47:18 <gibi> I think I'm convinced that this can be done for the libvirt driver. As for how to do it for the other drivers, that remains to be seen
16:47:50 <bauzas> I get your point but I think we need to be extra cautious, especially with some fancy scenarios involving ironic
16:48:07 <bauzas> gibi: that's a service issue, we need to stay driver agnostic
16:48:15 <sean-k-mooney> bauzas: ack, although what I was originally suggesting was driver specific
16:48:23 <artom> sean-k-mooney, right, true
16:48:29 <bauzas> sean-k-mooney: hence my point about hypervisor_hostname
16:48:44 <sean-k-mooney> yes, it's not that I didn't consider ironic
16:48:44 <artom> It would be entirely within the libvirt driver in init_host() or something, though I suspect we'd have to add new method arguments
16:48:47 <bauzas> I thought you were mentioning it as this is the only field that is virt specific
16:48:51 <sean-k-mooney> i intentionally was declaring it out of scope
16:48:58 <sean-k-mooney> and just fixing the issue for libvirt
16:49:11 <sean-k-mooney> bauzas: i dont think this can happen with ironic for what its worth
16:49:13 <bauzas> artom: init_host() is very late in the boot process
16:49:23 <artom> bauzas, well pre_init_host() then :P
16:49:29 <bauzas> you already have a service record
16:49:39 <bauzas> artom: pre init_host, you are driver agnostic
16:49:39 <gibi> bauzas: we need a libvirt connection though
16:49:45 <bauzas> gibi: I know
16:49:48 <gibi> bauzas: so I'm not sure we can do it earlier
16:49:52 <bauzas> gibi: my point
16:49:58 <sean-k-mooney> bauzas: we need to do it after we retrieve the compute service record but before we create a new one
16:50:18 <bauzas> iirc, we create the service record *before* we initialize the libvirt connection
16:50:46 <bauzas> this wouldn't be idempotent
16:50:50 <sean-k-mooney> perhaps we should bring this to #openstack-nova
16:51:22 <bauzas> I honestly feel this is tricky enough for drafting it somewhere... unfortunately like in a spec
16:51:27 <artom> FWIW we call init_host() before we update the service ref...
16:51:46 <bauzas> especially if we need to be virt-agnostic
16:51:50 <artom> And before we create the RPC server
16:51:58 <bauzas> artom: ack, then I was wrong
16:52:15 <artom> 'ts what I'm saying, we need the code :)
16:52:22 <artom> In a spec it's all high up and abstract
16:52:29 <bauzas> I thought we did it in pre_hook or something
16:52:36 <sean-k-mooney> no, we can talk about this in a spec if we need to
16:52:38 <bauzas> poc, then
16:52:43 <sean-k-mooney> specs don't have to be high level
16:52:46 <bauzas> poc, poc, poc
16:52:52 <sean-k-mooney> ok
16:53:05 <gibi> OK, lets put up some patches
16:53:09 <gibi> discuss it there
16:53:09 <artom> 🐔 poc poc poc it is then 🐔🐔
16:53:23 <gibi> and see if this can fly
16:53:30 <artom> Chickens can't fly
16:53:33 <gibi> I'm OK to keep this without a spec so far
16:53:33 <sean-k-mooney> if we have a few minutes I have one other similar topic
16:53:47 <sean-k-mooney> https://bugzilla.redhat.com/show_bug.cgi?id=1700390
16:53:48 <opendevmeet> bugzilla.redhat.com bug 1700390 in openstack-nova "KVM-RT guest with 10 vCPUs hangs on reboot" [High,Closed: notabug] - Assigned to nova-maint
16:53:55 <gibi> sean-k-mooney: sure
16:54:01 <sean-k-mooney> we have ^ downstream
16:54:25 <sean-k-mooney> basically when using realtime you should always use hw:emulator_threads_policy=something
16:54:46 <sean-k-mooney> but we don't disallow it because, while it's a bad idea not to, it can work
16:55:16 <sean-k-mooney> I'm debating between filing a wishlist bug vs a specless blueprint for a small change in our default logic
16:55:26 <gibi> feels like a bug to me that we can fix in nova. even if it works sometimes we can disallow it
16:55:39 <sean-k-mooney> if I remember correctly we still require at least 1 core to not be realtime
16:55:54 <sean-k-mooney> so I was thinking we could limit the emulator thread to that core
16:56:04 <gibi> sounds like a plan
16:56:19 <gibi> if we don't have such a limit then I think we should add that as well
16:56:22 <gibi> well
16:56:36 <gibi> it does not make much sense not to have an o&m cpu
16:56:36 <sean-k-mooney> we used to
16:56:39 <sean-k-mooney> i dont think we removed it
16:56:56 <sean-k-mooney> would people be ok with this as a bugfix
16:57:09 <sean-k-mooney> I was concerned it might be slightly feature-ish
16:57:22 <gibi> yes. It removes the possibility of a known bad setup
16:58:01 <sean-k-mooney> ok, I'll file and transcribe the relevant bit from the downstream bug so
16:58:25 <sean-k-mooney> the workaround is to just use hw:emulator_threads_policy=share|isolate
16:58:52 <sean-k-mooney> but it would be nice to not have the buggy config by default
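(As a usage example of the workaround mentioned above, the setting is a flavor extra spec; the flavor name here is just an example:)

    openstack flavor set rt-flavor --property hw:emulator_threads_policy=share

(With isolate, an additional dedicated host CPU is reserved per instance for the emulator threads.)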
16:58:59 <gibi> I agree
16:59:05 <gibi> and others seem to be silent :)
16:59:09 <gibi> so it is sold
16:59:17 <gibi> any last words before we hit the top of the hour?
17:00:04 <gibi> then thanks for joining today
17:00:08 <gibi> o/
17:00:12 <gibi> #endmeeting