*** PagliaccisCloud has joined #openstack-containers | 00:52 | |
*** hongbin has quit IRC | 01:00 | |
*** ianychoi has quit IRC | 01:20 | |
*** ricolin has joined #openstack-containers | 03:12 | |
*** hongbin has joined #openstack-containers | 03:15 | |
*** ykarel|away has joined #openstack-containers | 03:42 | |
*** udesale has joined #openstack-containers | 03:48 | |
*** ramishra has quit IRC | 03:59 | |
*** ricolin has quit IRC | 04:03 | |
*** udesale has quit IRC | 04:08 | |
*** udesale has joined #openstack-containers | 04:43 | |
*** lpetrut has joined #openstack-containers | 04:55 | |
*** hongbin has quit IRC | 05:07 | |
*** ykarel|away has quit IRC | 05:19 | |
*** ramishra has joined #openstack-containers | 05:26 | |
*** ykarel|away has joined #openstack-containers | 05:35 | |
*** lpetrut has quit IRC | 05:35 | |
*** ykarel|away is now known as ykarel | 05:42 | |
*** sdake has quit IRC | 06:35 | |
*** sdake has joined #openstack-containers | 06:40 | |
*** rcernin has quit IRC | 06:43 | |
*** mkuf_ has joined #openstack-containers | 06:58 | |
*** mkuf has quit IRC | 07:03 | |
*** ramishra has quit IRC | 07:15 | |
*** ykarel is now known as ykarel|lunch | 07:31 | |
*** mkuf_ has quit IRC | 07:52 | |
*** mkuf_ has joined #openstack-containers | 07:54 | |
*** ramishra has joined #openstack-containers | 08:01 | |
*** mgoddard has quit IRC | 08:10 | |
*** ykarel|lunch is now known as ykarel | 08:30 | |
*** ramishra has quit IRC | 08:44 | |
*** ramishra has joined #openstack-containers | 08:51 | |
*** lpetrut has joined #openstack-containers | 09:21 | |
openstackgerrit | Bharat Kunwar proposed openstack/magnum stable/queens: k8s_fedora: Add cloud_provider_enabled label https://review.openstack.org/624132 | 09:29 |
*** shrasool has joined #openstack-containers | 09:56 | |
*** salmankhan has joined #openstack-containers | 10:08 | |
*** salmankhan has quit IRC | 10:21 | |
*** salmankhan has joined #openstack-containers | 10:21 | |
*** shrasool has quit IRC | 10:26 | |
*** shrasool has joined #openstack-containers | 10:37 | |
*** udesale has quit IRC | 10:56 | |
*** tobias-urdin is now known as tobias-urdin|lun | 11:00 | |
*** tobias-urdin|lun is now known as tobias-urdin_afk | 11:01 | |
*** shrasool has quit IRC | 11:21 | |
*** shrasool has joined #openstack-containers | 11:22 | |
*** shrasool has quit IRC | 11:26 | |
*** tobias-urdin_afk is now known as tobias-urdin | 11:27 | |
*** salmankhan has quit IRC | 11:46 | |
*** salmankhan has joined #openstack-containers | 11:50 | |
*** mkuf has joined #openstack-containers | 12:26 | |
*** mkuf_ has quit IRC | 12:30 | |
*** dave-mccowan has joined #openstack-containers | 12:53 | |
*** dave-mccowan has quit IRC | 13:01 | |
*** ykarel is now known as ykarel|afk | 13:12 | |
*** zul has quit IRC | 13:26 | |
*** zul has joined #openstack-containers | 13:26 | |
*** ykarel|afk has quit IRC | 13:57 | |
*** salmankhan has quit IRC | 14:01 | |
*** salmankhan has joined #openstack-containers | 14:01 | |
*** mkuf_ has joined #openstack-containers | 14:06 | |
*** dave-mccowan has joined #openstack-containers | 14:07 | |
*** mkuf has quit IRC | 14:09 | |
*** mkuf has joined #openstack-containers | 14:14 | |
*** dave-mccowan has quit IRC | 14:14 | |
brtknr | mnaser: vexxhost is even involved in https://github.com/gophercloud/gophercloud | 14:14 |
brtknr | awesome! | 14:14 |
*** mkuf_ has quit IRC | 14:17 | |
*** shrasool has joined #openstack-containers | 14:18 | |
*** ykarel|afk has joined #openstack-containers | 14:18 | |
*** udesale has joined #openstack-containers | 14:33 | |
*** ykarel|afk is now known as ykarel | 14:37 | |
openstackgerrit | Merged openstack/magnum master: Add iptables -P FORWARD ACCEPT unit https://review.openstack.org/619643 | 14:42 |
*** salmankhan has quit IRC | 14:46 | |
mnaser | brtknr: yep :) | 14:58 |
*** salmankhan has joined #openstack-containers | 14:59 | |
brtknr | looks like cloud-controller-manager is using gophercloud to interact with the openstack API | 15:02 |
*** itlinux has quit IRC | 15:06 | |
*** salmankhan1 has joined #openstack-containers | 15:20 | |
*** salmankhan has quit IRC | 15:20 | |
*** salmankhan1 is now known as salmankhan | 15:20 | |
*** mkuf has quit IRC | 15:22 | |
*** mkuf has joined #openstack-containers | 15:28 | |
*** munimeha1 has joined #openstack-containers | 15:32 | |
*** ivve has joined #openstack-containers | 15:37 | |
*** ykarel is now known as ykarel|away | 15:38 | |
*** munimeha1 has quit IRC | 16:11 | |
*** udesale has quit IRC | 16:14 | |
openstackgerrit | weizj proposed openstack/magnum master: Remove the space to keep docs consistence with others https://review.openstack.org/596734 | 16:19 |
*** hongbin has joined #openstack-containers | 16:19 | |
*** itlinux has joined #openstack-containers | 16:22 | |
*** mriedem has joined #openstack-containers | 16:52 | |
mriedem | looking for some magnum cores to check out the upgrade-checkers community wide goal framework patch https://review.openstack.org/#/c/611505/ | 16:52 |
mriedem | hongbin: ^ | 16:52 |
mriedem | you should be familiar b/c of the zun one | 16:52 |
hongbin | mriedem: ack | 16:52 |
hongbin | mriedem: lgtm | 16:54 |
mriedem | thanks | 16:55 |
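[Editor's note: the framework patch under review follows the community-wide upgrade-checkers pattern mriedem mentions: each project grows a `<project>-status upgrade check` command whose named checks each return a result code. A stdlib-only sketch of that pattern follows; it mimics the shape of oslo.upgradecheck's `UpgradeCommands`/`Result` rather than importing it, and the class and check names are illustrative.]

```python
# Simplified stand-in for the oslo.upgradecheck pattern used by the
# upgrade-checkers goal: a project registers named check callables,
# and "<project>-status upgrade check" runs them all and reports the
# worst result code.
SUCCESS, WARNING, FAILURE = 0, 1, 2

class UpgradeCommands:
    # (name, check callable) pairs, mirroring oslo.upgradecheck's
    # _upgrade_checks tuple.
    _upgrade_checks = ()

    def check(self):
        worst = SUCCESS
        for name, func in self._upgrade_checks:
            code, details = func(self)
            print(f"{name}: {('OK', 'WARN', 'FAIL')[code]} - {details}")
            worst = max(worst, code)
        return worst

class MagnumChecks(UpgradeCommands):
    def _check_placeholder(self):
        # The initial framework patch ships a no-op placeholder check;
        # real checks are added as upgrade caveats appear.
        return SUCCESS, "no upgrade issues found"

    _upgrade_checks = (("Placeholder", _check_placeholder),)
```

The command exits non-zero when any check reports failure, so deployment tooling can gate upgrades on it.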
*** mriedem has left #openstack-containers | 16:57 | |
*** itlinux_ has joined #openstack-containers | 16:59 | |
*** itlinux has quit IRC | 17:03 | |
*** zul has quit IRC | 17:20 | |
*** ykarel|away has quit IRC | 17:35 | |
*** salmankhan has quit IRC | 17:51 | |
*** lpetrut has quit IRC | 17:56 | |
*** shrasool_ has joined #openstack-containers | 18:43 | |
*** shrasool has quit IRC | 18:45 | |
*** shrasool_ is now known as shrasool | 18:45 | |
*** lbragstad has quit IRC | 19:30 | |
*** lbragstad has joined #openstack-containers | 19:31 | |
*** munimeha1 has joined #openstack-containers | 19:39 | |
*** zul has joined #openstack-containers | 20:05 | |
*** shrasool has quit IRC | 20:34 | |
*** shrasool has joined #openstack-containers | 20:35 | |
openstackgerrit | Spyros Trigazis proposed openstack/magnum master: k8s_fedora: Use external kubernetes/cloud-provider-openstack https://review.openstack.org/577477 | 20:44 |
*** flwang has joined #openstack-containers | 20:48 | |
flwang | strigazi: do we have meeting today? | 20:48 |
strigazi | flwang: yes | 20:55 |
strigazi | flwang: would this be useful? https://indico.cern.ch/category/10892/ | 20:55 |
strigazi | flwang: I can send automatic email too, indico is an open-source tool developed at CERN that we use to manage meeting, conferences, lectures etc | 20:56 |
flwang | strigazi: nice | 20:58 |
strigazi | #startmeeting containers | 21:00 |
openstack | Meeting started Tue Dec 11 21:00:29 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot. | 21:00 |
openstack | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 21:00 |
*** openstack changes topic to " (Meeting topic: containers)" | 21:00 | |
strigazi | #topic Roll Call | 21:00 |
openstack | The meeting name has been set to 'containers' | 21:00 |
*** openstack changes topic to "Roll Call (Meeting topic: containers)" | 21:00 | |
strigazi | o/ | 21:00 |
cbrumm_ | o/ | 21:00 |
flwang | o/ | 21:01 |
strigazi | #topic Announcements | 21:02 |
*** openstack changes topic to "Announcements (Meeting topic: containers)" | 21:02 | |
strigazi | None | 21:03 |
strigazi | #topic Stories/Tasks | 21:03 |
*** openstack changes topic to "Stories/Tasks (Meeting topic: containers)" | 21:03 | |
strigazi | flwang: mnaser pushed some patches to improve the CI | 21:03 |
strigazi | #link https://review.openstack.org/#/q/topic:k8s-speed+(status:open+OR+status:merged) | 21:03 |
strigazi | the last five are kind of a refactor, the first ones are functional tests only. | 21:05 |
strigazi | With nested virt, the k8s test runs in ~40 min | 21:05 |
flwang | very cool | 21:05 |
flwang | i'm keen to review them | 21:05 |
strigazi | They are small actually, thanks | 21:06 |
flwang | strigazi: 40 is ok for now, we can improve it later | 21:06 |
strigazi | ~30 min of that goes just to deploying devstack | 21:06 |
flwang | that's quite reasonable then | 21:07 |
flwang | i love them | 21:08 |
flwang | in the future, we probably can support sonobuoy | 21:08 |
strigazi | From my side I completed the patch to use the cloud-provider-openstack k8s_fedora: Use external kubernetes/cloud-provider-openstack https://review.openstack.org/577477 | 21:08 |
strigazi | I did it because I was testing lxkong's patch for the delete hooks. | 21:09 |
flwang | strigazi: yep, i saw that, i just completed the review | 21:10 |
strigazi | cbrumm_: you are using this provider https://hub.docker.com/r/k8scloudprovider/openstack-cloud-controller-manager/ or the upstream from kubernetes ? | 21:11 |
flwang | commented | 21:11 |
cbrumm_ | looks good, we might have some small additions to make to the cloud controller so it will be good to have those in | 21:11 |
cbrumm_ | we're using the k8s upstream | 21:11 |
strigazi | cbrumm_: with which k8s release? | 21:11 |
cbrumm_ | yeah, matched to the same version of k8s, so 1.12 for right now | 21:12 |
strigazi | I use the out-of-tree and out of k8s repo one. I think new changes go there, did I get this wrong? | 21:13 |
cbrumm_ | we use this one https://github.com/kubernetes/cloud-provider-openstack | 21:14 |
*** shrasool has quit IRC | 21:14 | |
strigazi | oh, ok, my patch is using that one, cool | 21:14 |
cbrumm_ | good, that one is where all the work is going according to the slack channel | 21:15 |
strigazi | that's it from me, flwang cbrumm_ anything you want to bring up? | 21:16 |
cbrumm_ | Nothing, I'm missing people at kubecon and paternity leave | 21:17 |
flwang | strigazi: the keystone auth patch is ready and in good shape by my standards | 21:17 |
flwang | may need small polish, need your comments | 21:17 |
strigazi | This kubecon in particular must be very cool | 21:17 |
strigazi | flwang: I'm taking a look | 21:18 |
flwang | strigazi: and i'd like to discuss the rolling upgrade and auto healing if you have time | 21:18 |
strigazi | flwang: the patch is missing only the tag (?). I just need to test it. | 21:20 |
flwang | which patch? | 21:21 |
strigazi | the keystone-auth one | 21:22 |
flwang | yep, we probably need a label for its tag | 21:22 |
flwang | https://hub.docker.com/r/k8scloudprovider/k8s-keystone-auth/tags/ | 21:24 |
strigazi | For upgrades, I'm finishing what we discussed two weeks ago. I was a little busy with the CVE and some downstream ticketing. It's going well | 21:24 |
flwang | cool | 21:24 |
flwang | does the upgrade support master nodes? | 21:25 |
strigazi | btw, I did this script to sync containers and create cluster fast http://paste.openstack.org/raw/737052/ | 21:25 |
strigazi | flwang: yes, with rebuild (if using a volume for etcd) and inplace | 21:26 |
flwang | nice, thanks for sharing | 21:26 |
*** shrasool has joined #openstack-containers | 21:26 | |
strigazi | flwang: when moving to minor releases inplace should be ok. | 21:27 |
strigazi | 1.11.5 to 1.11.6 for example | 21:27 |
flwang | i'm happy with the current upgrade design | 21:27 |
flwang | do you want to discuss auto healing in meeting or offline? | 21:27 |
strigazi | let's do it now, we have a 1-hour meeting anyway | 21:28 |
strigazi | based on our experience with the k8s CVE | 21:28 |
strigazi | the healing and the monitoring of the health can be two separate things that are both required. | 21:29 |
strigazi | To clarify: | 21:29 |
flwang | i'm listening | 21:29 |
strigazi | monitoring the health status or even version? os useful for an operator | 21:29 |
strigazi | For the CVE I had to cook a script that checks all apis to see if they allow anonymous-auth | 21:30 |
strigazi | *all cluster APIs in our cloud. | 21:30 |
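[Editor's note: the script itself wasn't shared, but a check like the one strigazi describes can be sketched as follows. The `/api/v1/namespaces` probe path and how the endpoint list is gathered are assumptions; the idea is just one unauthenticated request per cluster API, classified by status code.]

```python
import ssl
import urllib.error
import urllib.request

def classify_status(status):
    """Map the HTTP status of an unauthenticated request to a
    normally protected path to an exposure verdict."""
    if status in (401, 403):
        return "anonymous-auth rejected"
    if status == 200:
        return "EXPOSED: anonymous-auth allowed"
    return f"inconclusive (HTTP {status})"

def check_cluster_api(endpoint):
    # Skip server cert verification: we only probe auth behaviour,
    # clusters typically use per-cluster CAs.
    ctx = ssl.create_default_context()
    ctx.check_hostname = False
    ctx.verify_mode = ssl.CERT_NONE
    try:
        with urllib.request.urlopen(endpoint + "/api/v1/namespaces",
                                    context=ctx, timeout=10) as resp:
            return classify_status(resp.status)
    except urllib.error.HTTPError as err:
        return classify_status(err.code)

# for api in cluster_api_endpoints:   # e.g. collected from cluster list output
#     print(api, check_cluster_api(api))
```

A 401/403 means the API server requires credentials; a 200 on a protected path means anonymous requests are being served.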
flwang | os useful for an operator ?? os=? | 21:30 |
strigazi | s/os/it is/ | 21:31 |
*** shrasool has quit IRC | 21:32 | |
strigazi | For autohealing, we could use the cluster autoscaler with node-problem-detector and draino instead of writing a magnum-specific autohealing mechanism | 21:33 |
flwang | anybody working on autohealing now? i'm testing node problem detector and draino now | 21:35 |
strigazi | does this make sense? | 21:35 |
cbrumm_ | node problem detector is something we'll be taking a deeper look at in Jan | 21:36 |
flwang | for health status monitoring, do you mean we still prefer to keep the periodic job to check the health status in Magnum? | 21:36 |
strigazi | yes | 21:36 |
flwang | cbrumm_: cool, please join us to avoid duplicating effort | 21:36 |
flwang | strigazi: ok, good | 21:37 |
lxkong | strigazi: there are 2 levels for auto-healing, the openstack infra and the k8s, and also we need to take care of both masters and workers | 21:37 |
cbrumm_ | sure, first we'll just be looking at the raw open source product, we aren't far at all | 21:37 |
lxkong | i discussed auto-healing with flwang yesterday, we have a plan | 21:38 |
strigazi | what is the plan? | 21:39 |
strigazi | why is openstack infra a different level? | 21:40 |
lxkong | use aodh and heat to guarantee the desired node count, NPD/draino/autoscaler for the node problems of workers | 21:42 |
strigazi | fyi, flwang lxkong if we make aodh a dependency, at least at CERN we will diverge, we just decommissioned ceilometer | 21:43 |
lxkong | but we need some component to receive notifications and trigger alarm actions | 21:44 |
lxkong | either it's ceilometer or aodh or something else | 21:44 |
strigazi | two years ago we stopped collecting metrics with ceilometer. | 21:45 |
flwang | lxkong: as we discussed | 21:45 |
strigazi | for notifications we started using logstash | 21:45 |
strigazi | I don't think aodh is a very appealing option for adoption | 21:45 |
flwang | the heat/aodh workflow is the infra/physical layer healing | 21:46 |
lxkong | yep | 21:46 |
flwang | let's focus on the healing which can be detected by NPD | 21:46 |
flwang | since catalyst cloud doesn't have aodh right now | 21:46 |
strigazi | is there something that NPD can't detect? | 21:46 |
flwang | strigazi: if the worker node loses connection or goes down suddenly, i don't think NPD has a chance to detect it | 21:47 |
lxkong | if the node on which NPD is running dies | 21:47 |
cbrumm_ | It can detect whatever you make a check for. It can use "nagios" style plugins | 21:47 |
flwang | since it's running as a pod on that kubelet, correct me if i'm wrong | 21:47 |
strigazi | flwang: if the node stopped reporting for X time it can be considered for removal | 21:48 |
flwang | by who? | 21:48 |
strigazi | autoscaler | 21:49 |
flwang | ok, i haven't added the autoscaler to my testing | 21:51 |
flwang | if autoscaler can detect such kind of issue, then i'm ok with that | 21:51 |
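[Editor's note: the "stopped reporting for X time" rule strigazi attributes to the autoscaler side can be sketched over plain node-status records. The record shape mirrors `kubectl get nodes -o json` items; the grace period and field handling are assumptions for illustration, not autoscaler code.]

```python
from datetime import datetime, timedelta

def stale_nodes(nodes, now, grace=timedelta(minutes=5)):
    """Return names of nodes whose Ready condition is not True and
    whose last heartbeat is older than `grace` -- i.e. candidates
    for removal/replacement."""
    candidates = []
    for node in nodes:
        for cond in node["status"]["conditions"]:
            if cond["type"] != "Ready":
                continue
            heartbeat = datetime.fromisoformat(cond["lastHeartbeatTime"])
            # Ready status "Unknown" is what kubelet silence looks like:
            # the node controller stops receiving status updates.
            if cond["status"] != "True" and now - heartbeat > grace:
                candidates.append(node["metadata"]["name"])
    return candidates
```

This is exactly the case NPD alone cannot cover: a dead node can't run the detector pod, but its silence is visible from the API server's side.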
strigazi | i couldn't find the link for it. | 21:52 |
flwang | that's alright | 21:52 |
strigazi | I think that we can separate the openstack specific checks from the k8s ones. | 21:52 |
flwang | at least, we're on the same page now | 21:52 |
strigazi | aodh or other solutions can take care of nodes without the cluster noticing | 21:52 |
flwang | NPD+draino+auotscaler and keep the health monitoring in magnum in parallel | 21:52 |
strigazi | yes | 21:53 |
flwang | how autoscaler deal with openstack? | 21:53 |
flwang | how autoscaler tell magnum i want to replace a node? | 21:53 |
*** schaney has joined #openstack-containers | 21:54 | |
strigazi | the current option is talking to heat directly, but we are working on the nodegroups implementation and on having the magnum api expose a node-removal api. | 21:54 |
flwang | strigazi: great, "the magnum api will expose a node removal api", we discussed this yesterday, we probably need an api like 'openstack coe cluster replace <cluster ID> --node-ip xxx / --node-name yyy | 21:56 |
*** itlinux_ has quit IRC | 21:57 | |
flwang | it would be nice if autoscaler can talk to magnum instead of heat because it would be easy for the auto scaling case | 21:57 |
flwang | for auto scaling scenario, magnum needs to know the number of master/worker nodes | 21:57 |
flwang | if autoscaler talks to heat directly, magnum can't know that info | 21:58 |
strigazi | in the autoscaler implementation we test here: https://github.com/cernops/autoscaler/pull/3 the autoscaler talks to heat and then to magnum to update the node count. | 21:58 |
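[Editor's note: the Magnum side of that flow goes through Magnum's JSON-patch style cluster update API. A small sketch of the body it expects follows; the PATCH format matches what `openstack coe cluster update <id> replace node_count=N` sends, and the client call in the trailing comment assumes python-magnumclient.]

```python
def node_count_patch(new_count):
    """Build the JSON-patch body that Magnum's
    PATCH /v1/clusters/<uuid> expects for a node-count change."""
    return [{"op": "replace", "path": "/node_count", "value": new_count}]

# With python-magnumclient (not imported here), roughly:
#   client.clusters.update(cluster_uuid, node_count_patch(3))
# which is how an autoscaler can keep Magnum's recorded node count in
# sync after scaling through Heat.
```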
*** rcernin has joined #openstack-containers | 21:59 | |
flwang | is it testable now? | 21:59 |
strigazi | yes, it is still WIP but it works | 22:00 |
flwang | fantastic | 22:02 |
flwang | who is the right person i should talk to ? are you working on that? | 22:02 |
strigazi | flwang: you can create an issue on github | 22:02 |
flwang | good idea | 22:03 |
strigazi | it is me tghartland and ricardo | 22:03 |
schaney | o/ sorry I missed the meeting, cbrumm sent me some of the conversation though. Was there any timeline on the nodegroup implementation in Magnum? | 22:03 |
strigazi | you can create one under cernops for now and then we can move things to k/autoscaler | 22:03 |
flwang | very cool, does that need any change in gophercloud? | 22:03 |
strigazi | schaney: stein and the sooner the better | 22:04 |
schaney | gophercloud will need the Magnum API updates integrated once that's complete | 22:04 |
strigazi | flwang: Thomas (tghartland) updated the deps for gophercloud, when we have new magnum APIs we will have to push changes to gophercloud too | 22:05 |
schaney | awesome | 22:06 |
strigazi | schaney: you work on gophercloud? autosclaler? k8s? openstack? all the above?:) | 22:06 |
schaney | =) yep! hoping to be able to help polish the autoscaler once everything is released, we sort of went a different direction in the mean time (heat only) | 22:08 |
strigazi | cool, if you have input, just shoot in github, we work there to be easy for people to give input | 22:09 |
flwang | schaney: are you the young man we met in Berlin? | 22:09 |
schaney | will do! | 22:09 |
flwang | strigazi: who is working on the magnum api change? | 22:09 |
schaney | @flwang I wasn't in Berlin. no, maybe Duc was there? | 22:10 |
flwang | is there a story to track that? | 22:10 |
flwang | schaney: no worries | 22:10 |
flwang | appreciate your contribution | 22:10 |
strigazi | for nodegroups it is Theodoros https://review.openstack.org/#/q/owner:theodoros.tsioutsias%2540cern.ch+status:open+project:openstack/magnum | 22:11 |
flwang | for the node group feature, how the autoscaler call it to replace a single node? | 22:13 |
flwang | i think we need a place to discuss the overall design | 22:13 |
strigazi | For node-removal we need to document it explicitly. We don't have something. | 22:13 |
strigazi | I'll create a new story, I'll push a spec too. | 22:14 |
strigazi | based on discussions that we had in my team, the options are two: | 22:14 |
strigazi | one is to have an endpoint to delete a node from the cluster without passing the NG. The other is to remove a node from a NG explicitly. | 22:15 |
strigazi | We might need both actually, I'll write the spec, unless someone is really keen on writing it. :) | 22:16 |
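[Editor's note: neither endpoint exists yet -- the spec is still to be written -- so the two options strigazi lists can only be sketched as hypothetical request paths. Everything below is illustrative, not a merged Magnum API.]

```python
def remove_node_url(base, cluster_id, node_id, nodegroup=None):
    """Build the (hypothetical) node-removal URL for the two options
    discussed: without naming a nodegroup, or against an explicit one."""
    if nodegroup is None:
        # Option 1: delete a node from the cluster without passing the NG.
        return f"{base}/v1/clusters/{cluster_id}/nodes/{node_id}"
    # Option 2: remove a node from an explicit nodegroup.
    return f"{base}/v1/clusters/{cluster_id}/nodegroups/{nodegroup}/nodes/{node_id}"
```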
strigazi | @all Shall we end the meeting? | 22:17 |
flwang | strigazi: let's end it | 22:17 |
schaney | sure yeah, thanks! | 22:17 |
strigazi | Thanks everyone | 22:17 |
strigazi | #endmeeting | 22:17 |
*** openstack changes topic to "OpenStack Containers Team" | 22:17 | |
openstack | Meeting ended Tue Dec 11 22:17:44 2018 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 22:17 |
openstack | Minutes: http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-11-21.00.html | 22:17 |
openstack | Minutes (text): http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-11-21.00.txt | 22:17 |
openstack | Log: http://eavesdrop.openstack.org/meetings/containers/2018/containers.2018-12-11-21.00.log.html | 22:17 |
lxkong | strigazi: is now a good time for you to chat about the cluster pre-delete patch? It's per coe because the resource deletion is related to the coe instead of the drivers, e.g. there is no difference in k8s resource cleanup between fedora and coreos. | 22:18 |
lxkong | i didn't fully understand your concern | 22:18 |
flwang | strigazi: Catalyst Cloud is keen to support auto healing by Feb 2019, so i'm really interested in a solution which doesn't hard-depend on node groups | 22:18 |
strigazi | we don't have to have nodegroups, the autoscaler has the notion of NGs but for magnum we assume a single NG | 22:19 |
strigazi | lxkong: fedora and coreos will become one and in practice the coreos driver is not used unless someone has forked it. | 22:20 |
strigazi | lxkong: I would prefer to not have one more plugin mechanism and make the hook just a module in drivers. | 22:21 |
lxkong | strigazi: but different deployers may have different cleanup requirements; can 'a module in drivers' do that? | 22:22 |
strigazi | lxkong: if downstream someone wants a different implementation they can use the drivers plugin architecture to have specific changes | 22:22 |
strigazi | lxkong: different deployments can have different drivers, that is the point of having drivers | 22:23 |
lxkong | strigazi: do you mean create a folder under drivers in the repo? | 22:23 |
strigazi | the drivers are not that big, it is easier to fork a driver along with its specific pre-delete hooks | 22:24 |
strigazi | lxkong: yes | 22:24 |
strigazi | lxkong: this folder will contain the pre-delete functionality. | 22:24 |
lxkong | strigazi: it's very hard to have a reference implementation for the pre-delete, so you mean, if we have the requirement, we just create a folder by ourselves and don't need to have it upstream? | 22:25 |
*** itlinux has joined #openstack-containers | 22:26 | |
strigazi | let's narrow this down, does catalyst have a special dependency that can't go upstream? | 22:26 |
lxkong | e.g. we use octavia and we've patched kubelet | 22:26 |
lxkong | so that lb create can have a description including the cluster uuid | 22:26 |
lxkong | but for others, that may not true | 22:27 |
lxkong | also when deleting the cluster, we may not delete the cinder volumes for the pv, but maybe not the case for the others | 22:27 |
strigazi | that's ok, when my patch is in for the ccm, everyone will have the cluster id in the description | 22:28 |
lxkong | what if someone is using neutron-lbaas? | 22:28 |
lxkong | also what if someone like us are not using ccm? | 22:28 |
lxkong | i remember yesterday there was a guy who said octavia is not an option for them :-( | 22:30 |
strigazi | isn't neutron lbaas deprecated? | 22:31 |
lxkong | yes, but i think many people are still using that | 22:31 |
lxkong | i also have a concern that maybe in the future there will be more integrations between openstack and ccm | 22:32 |
lxkong | e.g. barbican | 22:32 |
strigazi | lxkong: my point is that clouds that diverged from the upstream project can write their own hooks | 22:32 |
strigazi | using the driver plugin | 22:32 |
lxkong | yeah, i understand your concern | 22:33 |
lxkong | so do you think it's acceptable for my lb clean up implementation but move it to a package under derivers? | 22:35 |
flwang | strigazi: is your ccm patch backport-able? i mean, do you mind me cherry-picking it after it's merged? | 22:35 |
strigazi | flwang: it is, sure | 22:35 |
lxkong | s/derivers/drivers | 22:35 |
flwang | strigazi: cool | 22:35 |
strigazi | lxkong: I think we can cover octavia as the first pre-deletion hook. As a next step, for storage we can have a configurable option to purge volumes. | 22:37 |
strigazi | lxkong: if people ask for more options we can implement those or ask them to contribute. | 22:38 |
strigazi | lxkong: the design of drivers was chosen, to version OS, COE and plugins as a unit | 22:39 |
lxkong | strigazi: ok, sounds good. I'm still not sure about where i should put the code, e.g. we care about k8s_fedora_atomic, so is it correct that the code should live in magnum/drivers/k8s_fedora_atomic_v1/driver.py? | 22:41 |
lxkong | or a new folder called something like 'k8s_fedora_atomic_resource_cleanup_v1/driver.py' | 22:42 |
lxkong | the name looks ugly | 22:42 |
flwang | lxkong: strigazi: i think the general framework could be in the heat folder, but the octavia hook can be placed into the fedora_atomic folder, unless other users want to port it to other OS drivers | 22:44 |
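[Editor's note: that split could look roughly like this -- a generic hook point in the shared heat driver code, with the octavia cleanup registered by the fedora_atomic driver. All class, method, and hook names below are illustrative, not Magnum's actual internals.]

```python
# Illustrative only: names do not match Magnum's real classes.
class HeatDriver:
    """Shared base: runs whatever pre-delete hooks a concrete driver
    declares before the Heat stack itself is deleted."""
    pre_delete_hooks = ()

    def delete_cluster(self, context, cluster):
        for hook in self.pre_delete_hooks:
            hook(context, cluster)
        self._delete_stack(context, cluster)

    def _delete_stack(self, context, cluster):
        pass  # the real driver deletes the Heat stack here

def delete_octavia_loadbalancers(context, cluster):
    pass  # octavia cleanup, shipped alongside the fedora_atomic driver

class FedoraAtomicK8sDriver(HeatDriver):
    # Deployers who diverged downstream can fork the driver and swap
    # this tuple -- which is the point of the driver architecture.
    pre_delete_hooks = (delete_octavia_loadbalancers,)
```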
*** itlinux has quit IRC | 22:45 | |
lxkong | flwang: that's also the way i prefer, but wanna double check with strigazi | 22:46 |
lxkong | because that will affect all users who are using fedora_atomic | 22:46 |
strigazi | we can put it in the heat folder | 22:47 |
strigazi | if the cloud doesn't have octavia the hook will just pass right? | 22:47 |
lxkong | strigazi: i can add that check | 22:48 |
strigazi | lxkong: I think it is the same trick you used to check whether octavia is enabled or not. | 22:48 |
flwang | strigazi: it should be, as I talked with lxkong, the design of the hook shouldn't impact any current logic | 22:48 |
lxkong | strigazi: yes | 22:49 |
lxkong | strigazi: cool, all clear now, thanks for your time | 22:49 |
lxkong | i will update the patch accordingly | 22:49 |
strigazi | lxkong: cool | 22:49 |
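[Editor's note: the octavia cleanup agreed above then reduces to filtering load balancers whose description carries the cluster UUID -- the convention lxkong's kubelet patch writes and strigazi's CCM patch would make universal. The filter below works on plain records for illustration; a real hook would list them via the octavia API, and simply return when octavia isn't in the service catalog.]

```python
def cluster_loadbalancers(loadbalancers, cluster_uuid):
    """Pick the load balancers tagged with this cluster's UUID in
    their description field. Records mirror what the Octavia API
    returns; missing/null descriptions are treated as no match."""
    return [lb for lb in loadbalancers
            if cluster_uuid in (lb.get("description") or "")]

# The hook would then cascade-delete each matching LB (which also
# removes its listeners/pools) before the Heat stack is deleted.
```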
strigazi | flwang: let's take mnaser's changes, I'm thrilled to have the k8s CI working again. | 22:50 |
strigazi | https://review.openstack.org/#/q/status:open+project:openstack/magnum+branch:master+topic:k8s-speed | 22:50 |
flwang | strigazi: no problem | 22:50 |
lxkong | strigazi, flwang: i've tested this patch https://review.openstack.org/#/c/623724/, but the `OS::Heat::SoftwareDeploymentGroup` doesn't work in my devstack | 22:51 |
lxkong | that resource hangs until timeout | 22:51 |
lxkong | and heat-container-agent didn't get anything | 22:51 |
strigazi | lxkong: not this patch | 22:52 |
strigazi | lxkong: only the ones I +2, this patch needs work | 22:52 |
lxkong | it's not the functional ones | 22:52 |
lxkong | anyway, i will talk to mnaser, that's the one i'm interested in; we will see how much time it could save for cluster creation | 22:53 |
strigazi | I'm going to sleep guys, thanks for all the work, it is great working together | 22:54 |
lxkong | strigazi: have a good night | 22:54 |
*** shrasool has joined #openstack-containers | 22:55 | |
*** hongbin has quit IRC | 23:34 | |
*** hongbin has joined #openstack-containers | 23:35 | |
openstackgerrit | Merged openstack/magnum master: functional: retrieve cluster to get stack_id https://review.openstack.org/623575 | 23:51 |
*** hongbin has quit IRC | 23:58 |
Generated by irclog2html.py 2.15.3 by Marius Gedminas - find it at mg.pov.lt!