16:29:37 <sdake> #startmeeting kolla
16:29:38 <openstack> Meeting started Wed Jan 27 16:29:37 2016 UTC and is due to finish in 60 minutes.  The chair is sdake. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:29:39 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:29:41 <openstack> The meeting name has been set to 'kolla'
16:29:54 <sdake> #topic rollcall
16:29:55 <britthouser> 0/
16:29:58 <mkoderer> o/
16:29:59 <rhallisey> hi
16:30:00 <inc0> o/
16:30:02 <sdake> \o/
16:30:03 <Jeffrey4l> \0/
16:30:15 <unicell> o/
16:30:24 <pbourke> o/
16:30:35 <nihilifer> o/
16:31:02 <akwasnie> hi
16:31:20 <sdake> #topic announcements
16:31:24 <ajafo> o/
16:31:35 <sdake> #1. midcycle is february 9th and 10th.  if you haven't rsvped, please do so ASAP
16:31:54 <sdake> breakfast, lunch, and dinner provided the 9th; breakfast and lunch provided the 10th; soda and coffee both days
16:31:58 <sdake> so food is taken care of :)
16:32:04 <britthouser> thanks!
16:32:20 <sdake> #2. small token of appreciation from foundation
16:32:23 <jpeeler> hi
16:32:51 <sdake> for all current and past core reviewers of Kolla (including harmw and daneyon), the foundation and HP have a small token of appreciation for your duty as a core reviewer
16:33:03 <sdake> if you're a core reviewer, please send me your mailing address and i'll send it out
16:33:18 <pbourke> :)
16:33:44 <elemoine> o/
16:33:48 <sdake> the small token of appreciation is not these: https://www.audeze.com/products/lcd-collection/lcd-4
16:33:50 <sdake> but you will like it anyway
16:33:58 <sdake> even if you're in a different country i will sort out how to get it to you
16:34:04 <sdake> say thanks to MarkAtwood
16:34:38 <nihilifer> thank you MarkAtwood ;)
16:34:41 <sdake> if you don't want to share your home address, i will bring some to the midcycle if i have them in time
16:34:47 <sdake> if not i will bring some to the summit
16:35:20 <sdake> any announcements from the community?
16:35:49 <pbourke> none here
16:36:16 <Jeffrey4l> no
16:37:03 <sdake_> one last thing, i am currently remodeling my house
16:37:05 <sdake_> i am nearly done, 1-2 weeks out
16:37:27 <sdake_> but i am a bit distracted by the contractors, so if i'm failing you in any way let me know and i'll fix that up
16:37:34 <sdake_> i am also taking pto monday/tuesday, pretty much unavailable all day
16:37:37 <sdake_> possibly wednesday as well
16:37:48 <sdake_> i'll sync up with rhallisey if he needs to run the wed meeting
16:37:57 <sdake_> but i can probably take a break for that
16:38:10 <rhallisey> sdake, kk
16:38:12 <sdake_> #topic liberty 3 milestone review
16:38:17 <inc0> next meeting will be almost midcycle anyway
16:38:17 <sdake_> annoying
16:38:28 <Jeffrey4l> liberty? mitaka?
16:38:30 <sdake_> ok well i can't change topics because my nick is different
16:38:39 <sdake_> Jeffrey4l huh?
16:38:45 <sdake_> rather, mitaka 3 milestone review
16:38:46 <rhallisey> lols
16:38:49 <sdake_> sorry brain fart there :)
16:38:51 <sdake_> old and tired
16:38:53 <kproskurin> :-D
16:38:54 <Jeffrey4l> :D
16:39:36 <sdake_> #link https://launchpad.net/kolla/+milestone/mitaka-3
16:39:39 <sdake_> please open that up
16:39:42 <sdake_> time boxed to 50 after
16:40:05 <sdake_> as you can see my begging and pleading with the core review team to tackle upgrades has partially worked
16:40:09 <sdake_> but I need m0ar help
16:40:27 <sdake_> if folks could take 1 regular service and 1 infrastructure service that would rock
16:40:40 <sdake_> and we will come out of the midcycle with a plan for upgrading infrastructure services
16:41:12 <sdake> #topic mitaka-3 review
16:41:29 <Jeffrey4l> "Upgrade Manila from Mitaka to Newton" -> why do we have this? Wrong description?
16:42:01 <sdake> jeff manila only came about in mitaka
16:42:11 <sdake> but we need to make sure database migrations work properly
16:42:17 <mkoderer> "upgrade" means everything related to a db schema update, right?
16:42:22 <sdake> right
16:42:36 <sdake> as well as applying configuration changes
16:42:43 <mkoderer> sdake: k
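The "upgrade = db schema update" idea mkoderer and sdake settle on above boils down to stepping the schema from its current version to head. A minimal sketch, with hypothetical names (in reality each OpenStack service does this through its own `<service>-manage db sync` command):

```python
def run_db_migrations(current_version, migrations):
    """Apply pending schema migrations in order until head is reached.

    `migrations` maps target version -> callable performing that step
    (a stand-in for one batch of ALTER TABLE statements). Steps at or
    below `current_version` are skipped, so re-running is harmless.
    """
    version = current_version
    while version + 1 in migrations:
        migrations[version + 1]()
        version += 1
    return version
```

Running it from version 0 applies every step; running it again from head is a no-op, which is the property an upgrade play needs.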
16:42:45 <sdake> applying configuration changes i think happens automatically
16:42:48 <sdake> but inc0 would know for sure
16:43:05 <inc0> it does
16:43:09 <sdake> inc0 has got us kicked off with two kick-ass implementations - one simple keystone and one very hard nova
16:43:23 <inc0> it overrides old confs with new
16:43:38 <sdake> ya we will have deploy and upgrade
16:43:40 <sdake> no update
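inc0's point that upgrade "overrides old confs with new" (a wholesale replace, not a merge) could be sketched like this; `push_configs` and the directory layout are illustrative assumptions, not Kolla's actual code:

```python
import shutil
from pathlib import Path


def push_configs(rendered_dir: Path, node_conf_dir: Path) -> None:
    """Replace a node's service config with the freshly rendered one.

    Old files are dropped wholesale -- the upgrade overrides config,
    it does not merge, so stale options from the previous release
    cannot linger.
    """
    if node_conf_dir.exists():
        shutil.rmtree(node_conf_dir)
    shutil.copytree(rendered_dir, node_conf_dir)
```

After the push, the node's config directory is byte-for-byte the newly rendered one.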
16:43:41 <Jeffrey4l> But I am curious, is putting manila here (milestone 3) proper?
16:44:00 <sdake> manila needs to be able to handle a database migration update
16:44:12 <mkoderer> Jeffrey4l: where do we have manila? midcycle etherpad?
16:44:20 <sdake> i dont even know if manila works, haven't tried it
16:44:32 <mkoderer> sdake: the container builds
16:44:36 <sdake> also root drop needs some love
16:44:40 <mkoderer> ansible isn't ready yet
16:44:46 <sdake> we have done about 25 of the todos
16:44:48 <sdake> but 5 remain
16:44:57 <Jeffrey4l> sdake,  https://blueprints.launchpad.net/kolla/+spec/upgrade-manila  here, the target is m3
16:44:59 <sdake> if some folks could pick up 1 or 2, i'd like to finish that out before mitaka
16:45:10 <sdake> Jeffrey4l yes that is correct
16:45:21 <sdake> m3 is march 4th
16:45:27 <sdake> i expect manila will be functional prior to then
16:45:40 <mkoderer> without manila support upgrading makes no sense :)
16:45:42 <sdake> anyone have any blueprints they wish to bring attention now that my begging is done :)
16:45:50 <sdake> manila is implemented
16:45:58 <sdake> playbooks i think are in review or implemented
16:46:11 <Jeffrey4l> But the description is "Upgrade Manila from Mitaka to Newton", there is no Newton version before m3
16:46:12 <mkoderer> sdake: https://review.openstack.org/#/c/269688/
16:46:12 <sdake> manila is lower priority than the others
16:46:20 <unicell> quick question: the "upgrade" - is it for upgrades between kolla releases? how about source builds, since they always use master?
16:46:24 <mkoderer> it's a wip
16:46:36 <sdake> unicell that is what it's for as well
16:46:39 <sdake> upgrade liberty to master
16:46:43 <sdake> upgrade mitaka to master
16:46:47 <sdake> upgrade master a to master b
16:46:59 <unicell> mainly db schema?
16:47:05 <inc0> unicell, in reality it is upgrade from latest stable kolla to master
16:47:08 <sdake> we have determined the hard part of the upgrade is the db schema
16:47:19 <inc0> but it will work on any kolla you have to master
16:47:23 <sdake> inc0 good point
16:47:28 <inc0> we just support latest stable to current master
16:47:51 <unicell> I see
16:47:53 <sdake> 2 more mins on time box
16:47:54 <inc0> you can run upgrades every hour and it will work
16:48:03 <sdake> samyaple are you about?
16:48:07 <inc0> I don't recommend it, but it will ;)
16:48:15 <mkoderer> inc0: old stable to new stable would make sense too
16:48:17 <unicell> inc0: that means every commit in kolla master should be backward compatible?
16:48:18 <sdake> inc0 assuming master works ;)
16:48:34 <inc0> mkoderer, new stable is a master from some point of time;)
16:48:51 <liyi> do we also have rollback in agenda?
16:49:03 <unicell> inc0: I'm just thinking about your `run upgrades every hour` statement
16:49:08 <sdake> one area of concern i have is the data containers to the named volumes
16:49:14 <sdake> we need to discuss that at the midcycle
16:49:35 <inc0> unicell, depends how you define backward compatibility, we don't have running service or api
16:49:52 <inc0> each run of plays is idempotent and atomic action
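The "idempotent and atomic" property inc0 describes means each task checks desired state before acting, which is why running the upgrade plays hourly is safe. A toy sketch of the pattern (the `state` dict stands in for real Docker API calls; names are hypothetical):

```python
def ensure_container(state: dict, name: str, image: str) -> bool:
    """Recreate a container only if it is not already on the target image.

    Returns True when a change was made. A second run with the same
    arguments changes nothing -- the task is idempotent.
    """
    if state.get(name) == image:
        return False
    # stand-in for: pull image, stop old container, start replacement
    state[name] = image
    return True
```

Ansible modules follow the same check-then-act contract, reporting "changed" only when they actually did something.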
16:49:57 <sdake> liyi rollback is not in the meeting agenda, no
16:50:09 <inc0> and we don't have database
16:50:09 <sdake> samyaple ping
16:50:11 <sdake> you're up next
16:50:27 <liyi> thanks sdake
16:50:40 <unicell> inc0: assuming every time we do a fresh deploy, then yes. but what if I keep some containers running, while updating others
16:50:45 <sdake> #topic Kolla-ansible config playbook
16:50:45 <elemoine> sdake, your data containers vs named volumes concern is related to upgrades?
16:50:57 <sdake> elemoine yes
16:51:03 <inc0> unicell, let's move this to open discussion plz
16:51:08 <unicell> inc0: sure
16:51:45 <sdake> well this is samyaple's topic - if  he wakes up we can move back to him :)
16:51:57 <nihilifer> SamYaple: ?
16:51:59 <pbourke> I'm not understanding that topic
16:52:03 <sdake> #topic Logging with Heka spec
16:52:09 <sdake> pbourke its in the agenda
16:52:13 <sdake> i dont know the details sorry
16:52:28 <pbourke> np
16:52:33 <sdake> I follow the agenda here:
16:52:36 <sdake> #link https://wiki.openstack.org/wiki/Meetings/Kolla
16:52:43 <sdake> feel free to update it with your items
16:52:49 <elemoine> should I go ahead and talk about Heka spec?
16:52:59 <sdake> you have the floor elemoine
16:53:09 <elemoine> I updated the spec today, based on the good comments I had
16:53:22 <sdake> see, the spec process is helpful elemoine :)
16:53:46 <elemoine> the one missing bit is verifying that I can handle haproxy and keepalived using Heka only
16:53:55 <elemoine> I am prototyping this right now
16:54:02 <elemoine> and it's promising :)
16:54:14 <elemoine> I got it to work for haproxy
16:54:20 <sdake> nice!
16:54:29 <sdake> i think now we have no haproxy logging at all
16:54:34 <elemoine> so I can't promise it, but it sounds like we'll be able to do it with just Heka
16:54:43 <elemoine> sdake, correct
16:54:50 <sdake> akwasnie have any feedback?
16:54:53 <elemoine> but we want haproxy logging in the future
16:54:53 <nihilifer> gj elemoine
16:55:32 <britthouser> Remind me: we decided heka can replace both rsyslog and one of the other services?
16:55:32 <akwasnie> just one question elemoine: are you able to connect your patch with Heka with my patch with Elastic?
16:55:35 <elemoine> when I am done with the prototyping work I'll update the spec again
16:55:42 <sdake> britt roger
16:55:52 <akwasnie> I mean will it work together?
16:55:59 <elemoine> akwasnie, I have no doubt about this
16:55:59 <inc0> britthouser, if it can that is
16:56:05 <elemoine> this is why I did not focus on that
16:56:07 <sdake> britthouser logstash is gone now with heka
16:56:19 <britthouser> nice...two birds, one heka. =)  GJ  elemoine!
16:56:35 <sdake> my only concern with heka is reliability
16:56:39 <nihilifer> akwasnie: your elk patches will be merged soon, so elemoine will begin "ready to merge" work after that
16:56:42 <sdake> and my lack of experience with said reliability
16:57:05 <sdake> but if  folks say its solid i'll go with that
16:57:09 <akwasnie> nihilifer: great
16:57:13 <elemoine> sdake, yeah new technology for you
16:57:28 <elemoine> but we've been using Heka for quite some time
16:57:37 <elemoine> and we're confident it's reliable
16:57:46 <sdake> wfm
16:57:58 <elemoine> and Mozilla use it in production as well
16:58:13 <inc0> and we don't have anything really now
16:58:17 <elemoine> and we're committed to babysitting it :)
16:58:37 <sdake> elemoine can you link the spec please
16:58:48 <inc0> we all will babysit it, but your help will be super useful
16:59:01 <elemoine> https://review.openstack.org/#/c/270906/
16:59:08 <sdake> core reviewers, specs require simple majority approval
16:59:13 <elemoine> inc0, right, I was joking here
16:59:14 <sdake> so I'd appreciate your time in reviewing the specs
16:59:20 <sdake> so  i don't have to track down votes
16:59:34 <sdake> #link https://review.openstack.org/#/c/270906/
16:59:35 <elemoine> sdake, I'll update the spec again tomorrow
16:59:49 <elemoine> when I'm done I'll remove the [WIP] flag in the commit message
17:00:06 <elemoine> when WIP is gone it'll mean "please review"
17:00:17 <elemoine> works for you?
17:00:27 <sdake> yup
17:00:30 <inc0> you can always use workflow-1 for that
17:00:31 <sdake> works for everyone :)
17:00:35 <inc0> but either way works
17:00:40 <sdake> might as well review along the way
17:00:53 <elemoine> ok
17:00:57 <sdake> i don't like specs to be reviewed at the last minute
17:01:07 <sdake> i'd like them continuously reviewed by the core reviewer team
17:01:19 <sdake> this is why we don't use specs often - a lot of overhead
17:01:21 <elemoine> that's all on my side, I'm looking forward to integrating this with akwasnie's EK work
17:01:39 <sdake> #topic open discussion
17:01:46 <sdake> so data containers to named volumes
17:01:47 <inc0> having central logging this release would be wonderful
17:01:58 <sdake> inc0 me first then you :)
17:01:59 <unicell> so we skipped "Kolla-ansible delegate_to vs run_once" topic as well?
17:02:03 <elemoine> inc0, yep, agree
17:02:11 <sdake> unicell samyaple isn't here
17:02:22 <sdake> and I dont know what he wants to discuss in that topic
17:02:45 <liyi> how about the other guys get involved with that topic
17:02:48 <unicell> sdake: ok.. I kind of hit the issue with delegate_to vs run_once
17:02:49 <sdake> if the agenda topic leads aren't here in the meeting we will skip those
17:03:04 <unicell> interested to know what the direction would be to solve the problem
17:03:17 <sdake> unicell sync up with sam when he is in
17:03:19 <liyi> the bug is really a pain in the ass now
17:03:29 <unicell> guess we can discuss when Sam is online   yep
17:03:57 <liyi> ok
17:03:58 <sdake> so data containers to named volumes
17:04:12 <sdake> i was thinking we could make a backup migration playbook
17:04:14 <sdake> that is run once
17:04:27 <sdake> that migrates data containers to named volumes
17:04:36 <sdake> thoughts/concerns?
17:04:38 <inc0> one thing I have to say on this topic: we're switching docker version with this, kolla will not have backward compatibility
17:04:41 <sdake> (needs to finish march)
17:04:56 <unicell> sdake: is that part covered in any of the upgrade blueprints?
17:04:57 <sdake> inc0 why not
17:04:58 <inc0> and yeah, migration play will have to happen and be part of upgrade play, good point
17:05:17 <nihilifer> sdake: that would probably be an ansible module as well, but the idea is good for me
17:05:18 <inc0> sdake, just something to consider
17:05:21 <inc0> I'm ok with that
17:05:41 <Jeffrey4l> I think the backup is necessary.
17:05:41 <sdake> no i mean when we switch to docker 1.10, why is there no backward compat?
17:05:52 <inc0> sdake, can you pop a bp and connect it to main upgrade bp?
17:06:01 <inc0> named volumes
17:06:10 <inc0> thin containers
17:06:11 <sdake> yes well we need to solve that problem too
17:06:30 <sdake> inc0 can you write the blueprint, i think you have a better handle on the problem
17:06:34 <inc0> if we switch to docker 1.10 and start using its stuff right away, which we should
17:06:40 <inc0> ok
17:07:02 <inc0> we stop supporting older version of docker
17:07:16 <sdake> i am good with 1.10 of docker
17:07:27 <sdake> maybe docker inc has finally gotten docker right this time ;)
17:07:32 <sdake> seems like it from my registry experiments
17:07:34 <inc0> me too, but we might need to put this in the upgrade play as well
17:07:48 <elemoine> so for upgrade we'll need something that copies the data from the old data container to the new named volume, right?
17:08:07 <inc0> elemoine, maybe there is a clean way to turn a container into a volume..
17:08:11 <sdake> yes but it only needs to run if the data container is up
17:08:26 <sdake> and if it is up, we backup, restore, and kill the old data container
17:08:41 <sdake> left as an exercise to the reader
17:08:48 <sdake> named volumes are already in the code base
17:08:52 <sdake> so it needs to be handled
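A run-once migration play like the one sdake proposes could look roughly like this sketch; the marker-file name and copy logic are assumptions for illustration, not the actual blueprint:

```python
import shutil
from pathlib import Path

# hypothetical sentinel making the migration safe to re-run
MARKER = ".migrated_to_named_volume"


def migrate_data_container(data_dir: Path, volume_dir: Path) -> bool:
    """One-shot copy of a legacy data container's files into a named volume.

    Returns True if the migration ran, False if it already happened.
    (File ownership/permissions, which elemoine flags as the hard part,
    are glossed over here; copy2 preserves mode and timestamps only.)
    """
    marker = volume_dir / MARKER
    if marker.exists():
        return False
    volume_dir.mkdir(parents=True, exist_ok=True)
    for item in data_dir.iterdir():
        dest = volume_dir / item.name
        if item.is_dir():
            shutil.copytree(item, dest)
        else:
            shutil.copy2(item, dest)
    marker.touch()
    return True
```

The marker file is what makes it part of an idempotent upgrade play: the second run does nothing.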
17:09:11 <elemoine> file owners/permissions might be the hard part
17:09:27 <inc0> I'll pop bps for both - upgrading docker and migration between versions
17:09:27 <sdake> i want people to be able to deploy liberty and then upgrade to mitaka via kolla with zero problems
17:09:29 <sdake> downtime is fine
17:09:46 <inc0> downtime of API is fine*
17:09:58 <sdake> not the vms of course
17:11:14 <Jeffrey4l> upgrading docker will stop all the containers (especially openvswitch and nova-libvirt) on a host. So should we move the vms on the host to another in the play?
17:11:38 <sdake> nah libvirt will reconnect
17:11:42 <sdake> i have tested this extensively
17:11:51 <sdake> libvirt can be upgraded without vm downtime
17:12:27 <inc0> qemu can't tho
17:12:37 <inc0> not sure about ovs
17:12:39 <Jeffrey4l> the qemu process and br-ex from openvswitch will be killed. at that time, the vms are unreachable.
17:12:52 <sdake> a short network downtime is ok
17:13:01 <sdake> losing vm state is not good
17:13:15 <inc0> I'd rather keep us out of upgrading ovs right now
17:13:18 <sdake> qemu-kvm process is not killed
17:13:38 <inc0> for me ovs upgrade is something we should handle outside of our play
17:13:47 <sdake> inc0 you're the boss ;)
17:14:07 <sdake> inc0 has taken lead on this folks, i'm just here to facilitate
17:14:11 <inc0> as it's not upgraded every 6 months and is way more disruptive
17:14:11 <sdake> inc0 knows the tech well
17:14:21 <sdake> its a one time upgrade
17:14:30 <sdake> oh you mean ovs
17:14:32 <sdake> right
17:14:36 <unicell> quick question: do we have a working sdn solution integration in kolla, or is it just ovs with bridged network?
17:14:37 <sdake> ovs is a infrastructure service
17:14:40 <sdake> not an openstack service
17:14:50 <sdake> we need a different strategy for infrastructure services than openstack services
17:14:59 <inc0> +1 sdake
17:15:02 <Jeffrey4l> I am talking about the docker service upgrade. When the docker service is stopped and all the containers are down, the qemu process should be killed, in my understanding.
17:15:12 <inc0> unicell, it's normal ovs+neutron way
17:15:26 <inc0> we also support linuxbridge
17:15:27 <sdake> that is what i talked about above, investigate the infrastructure services and come to the midcycle prepared with a plan
17:15:48 <sdake> Jeffrey4l 100% trust me qemu is not killed
17:15:49 <inc0> good point Jeffrey4l
17:15:57 <sdake> pid=host
17:15:57 <elemoine> sdake, infrastructure upgrade is as important as openstack upgrade, no?
17:16:08 <sdake> elemoine second in importance
17:16:20 <sdake> with pid=host, anything that goes into the host pid space docker cannot kill
17:16:24 <inc0> elemoine, it doesn't have to be upgraded every 6 months, and that's easier
17:16:39 <elemoine> inc0, sdake, ok I see
17:16:41 <Jeffrey4l> sdake, ok. I will make that test later.
17:16:53 <inc0> Jeffrey4l, please do
17:16:58 <sdake> Jeffrey4l yup double check that is still the case since docker is a moving target
17:17:07 <Jeffrey4l> :D
17:17:10 <sdake> but fwiw 1.8 and below had this behavior
17:17:44 <sdake> if it does kill the vms we need to fix that
17:18:16 <inc0> unicell, about fresh deploy vs running kolla
17:18:24 <Jeffrey4l> yup. I will show the result.
17:18:32 <inc0> our upgrade strategy really means you redeploy containers, one at a time
17:18:33 <Jeffrey4l> when finished.
17:18:40 <unicell> inc0: was trying to understand better about the upgrade
17:18:46 <elemoine> http://sdake.io/2015/01/28/an-atomic-upgrade-process-for-openstack-compute-nodes/ is what sdake wrote on the subject some time ago
17:19:02 <unicell> does this upgrade allow us to upgrade some openstack services while running others?
17:19:04 <inc0> if you have 3 keystones, only 1 of them will be down in any given moment
17:19:15 <inc0> unicell, that's how it works
17:19:23 <inc0> we upgrade 1 service at a time
17:19:35 <unicell> I mean, ceph could have dependency on kolla-ansible
17:19:35 <inc0> first keystone, then after we're done we start with glance
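inc0's rolling-upgrade description (with 3 keystones only 1 is ever down; glance starts only once keystone is done) matches Ansible's `serial: 1` behavior, which can be simulated with a toy event trace (names are illustrative):

```python
def rolling_upgrade(services):
    """Simulate a serial-1 rolling upgrade across services.

    `services` is a list of (name, replica_count) pairs, upgraded in
    order. Each replica is taken down and brought back before the next
    one starts, so at most one replica is ever down.
    """
    events = []
    for name, replicas in services:
        for r in range(replicas):
            events.append((name, r, "down"))
            events.append((name, r, "up"))
    return events
```

Scanning the event trace confirms the two guarantees: downtime never overlaps, and the second service's first event comes after the first service's last.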
17:19:46 <unicell> not the multiple instance of the same service part
17:19:47 <sdake> elemoine thanks, i had forgotten about that, it has screenshots proving that compute vms do not die during a libvirt upgrade
17:20:10 <inc0> unicell, we try to keep containers independent
17:20:32 <inc0> kolla-ansible is for example just run as a bootstrapping tool, after that it just hangs in there
17:20:45 <inc0> there is no container-to-container communication
17:21:30 <unicell> there's ansible module inside kolla ansible container
17:21:53 <inc0> yeah, and its main task is to create users for services in keystone and so on
17:22:10 <inc0> and that's not even run during upgrade, that's a bootstrap task
17:22:35 <Jeffrey4l> kolla-ansible -> docker.sock (mounted from the host) -> other containers.  this is how it works.
17:22:55 <inc0> it doesn't really do this even
17:23:01 <unicell> ok, I think I get your point, containers should be independent most of the time
17:23:02 <inc0> it does call APIs
17:23:14 <inc0> with shade, which is the ansible module for openstack
17:23:35 <sdake> ok upgrades beaten to death
17:23:41 <sdake> anyone else have topics to discuss?
17:23:42 <inc0> unicell, main benefit you have from containers is separation
17:24:13 <unicell> there are still cases where openstack services have cross-component dependencies, right?
17:24:22 <schwarzm> sdake: there is this kolla-kubernetes bp .. is there any prior art ?
17:24:56 <sdake> i am not familiar with this blueprint but i am not willing to take on something as complex as that at the end of the dev cycle
17:24:57 <inc0> unicell, depends what do you mean
17:25:03 <sdake> especially when we have so much work to do schwarzm
17:25:27 <sdake> but kolla started out as containers on kuberntes
17:25:37 <inc0> in general services communicate either over API or queue, and neither is a container-ish dependency
17:25:43 <sdake> and it didn't work because kubernetes didn't support pid=host and net=host
17:26:02 <sdake> now kubernetes supports those things from what i hear
17:26:07 <schwarzm> that is if the components using this are inside the containerized workload
17:26:08 <Jeffrey4l> what will it look like in the end? A web-based tool like fuel? Or an api-based service, which can be used by other deployment tools?
17:26:11 <rhallisey> sdake, ya it does
17:26:22 <Jeffrey4l> for kolla
17:26:26 <schwarzm> we for instance would use a vcenter as backend
17:26:34 <nihilifer> Jeffrey4l: i guess we'll discuss this at midcycle
17:26:40 <schwarzm> which would eliminate this need
17:26:57 <schwarzm> but ok if there is nothing i could look at i will keep an eye on the bp
17:27:17 <nihilifer> Jeffrey4l: but yes, at least for kolla-mesos i'd like to have an api to pass configuration without /etc files
17:27:19 <rhallisey> schwarzm, did you have any questions pertaining to kolla-kube?
17:27:22 <sdake> schwarzm are you interested in working on this work?
17:27:33 <rhallisey> schwarzm, it's still in the early stages
17:27:37 <nihilifer> Jeffrey4l: to make kolla consumable by i.e. fuel
17:27:44 <unicell> inc0: I haven't given tempest in kolla a try, but does it actually work and can it ensure the entire kolla setup is working as expected?
17:27:45 <rhallisey> I'm going to make a run at it in Feb
17:27:55 <Jeffrey4l> nihilifer, cook
17:27:58 <schwarzm> i will take a good look at that
17:27:58 <Jeffrey4l> cool.
17:28:01 <rhallisey> as long as updates and root priv stuff is done
17:28:19 <sdake> unicell the gates dont have enough ram to run tempest
17:28:28 <sdake> so if you want to solve that problem start by reducing ram in the gates
17:29:00 <unicell> sdake: we don't have multinode setup in gate either, correct?
17:29:20 <inc0> unicell, talk with pbourke about that, he is guy to talk to:)
17:29:27 <sdake> unicell not yet
17:29:29 <sdake> ok times up
17:29:33 <sdake> lets flow over to #kolla
17:29:37 <sdake> #endmeeting