09:58:58 #startmeeting containers
09:58:59 Meeting started Tue Jun 12 09:58:58 2018 UTC and is due to finish in 60 minutes. The chair is strigazi. Information about MeetBot at http://wiki.debian.org/MeetBot.
09:59:00 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
09:59:03 The meeting name has been set to 'containers'
09:59:04 #topic Roll Call
09:59:15 o/
09:59:19 o/
09:59:56 o/
10:00:49 Thanks for joining the meeting brtknr flwang1 :)
10:00:51 #topic Announcements
10:01:04 o/
10:01:05 Magnum is on storyboard \m/
10:01:20 Yay
10:01:21 strigazi: Hurrah! Including blueprints?
10:01:48 I also moved the BPs. Need to reset some statuses.
10:02:06 e.g., this was a BP: https://storyboard.openstack.org/#!/story/2002210
10:02:08 That way we can set up cross-project tasks across Heat and Magnum
10:02:37 brtknr: the problem is that what I did was:
10:03:02 bp_status -> story_status -> task_status
10:03:28 but story_status is implicit and the status info got lost on the way
10:03:46 Here is an Etherpad I built for Heat; most of the information applies to Magnum as well
10:03:50 https://etherpad.openstack.org/p/Heat-StoryBoard-Migration-Info
10:04:12 storyboard's client is pretty sweet so I'm scripting it again
10:04:18 thanks ricolin
10:04:55 questions about storyboard?
10:05:37 #topic Blueprints/Bugs/Ideas
10:06:46 I pushed a patch for upgrades; you can have a look in the story above
10:07:19 oh, it is not there, I didn't add the task, but here it is:
10:07:24 strigazi: thanks, but I cannot see a new API to list the valid versions a cluster can be upgraded to
10:07:34 https://review.openstack.org/514959
10:08:15 flwang1: which API? list versions?
10:08:34 strigazi: yes
10:09:32 We don't have such a thing in the spec; users can see versions in cluster templates
10:10:06 ok, we can discuss it offline
10:10:33 ok, this is the API side
10:10:38 on the driver side,
10:11:01 I have some issues when replacing nodes
10:11:23 We are setting the node name, so:
10:12:03 1. the k8s cloud provider complains that it sees many nodes with the same name
10:12:31 2. in clouds where the Nova name matches the DNS name, VM creation won't work
10:12:59 do these two make sense?
10:13:43 yes
10:14:23 strigazi: do we have to use the same name?
10:14:27 So, we need to change the name if we want Heat to do the rolling replacement magic
10:15:01 flwang1: we don't, but right now we do
10:15:08 ok
10:15:11 flwang1: and it is more user friendly
10:15:27 i don't really think it's user friendly
10:15:31 Is that the same thing that we talked about earlier?
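
A minimal sketch of the StoryBoard scripting mentioned at 10:04:12, written here with plain requests against the public StoryBoard REST API rather than the client library; the /stories/<id>/tasks path and the bearer-token handling are assumptions, not details from the meeting:

    import requests

    API = "https://storyboard.openstack.org/api/v1"
    TOKEN = "replace-with-a-personal-access-token"

    def tasks_for_story(story_id):
        # GET /v1/stories/<id>/tasks lists the tasks attached to a story;
        # this endpoint is an assumption based on the public REST API.
        resp = requests.get(
            f"{API}/stories/{story_id}/tasks",
            headers={"Authorization": f"Bearer {TOKEN}"},
        )
        resp.raise_for_status()
        return resp.json()

    # e.g. the migrated blueprint referenced at 10:02:06
    for task in tasks_for_story(2002210):
        print(task["id"], task.get("status"))
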
10:15:33 it could be confusing
10:15:39 for users to see -some-ids
10:15:47 ricolin: yes
10:17:05 flwang1: you think this name is better: ku-jamh4gh6zb-0-uwgdrnyx3azd-kube-master-vnzxh43pokl4 than
10:17:09 I'm thinking maybe we can have a pre-action before update-replace for SD
10:17:38 this one: strkube-scanner-ne7at5fdzfhy-master-0
10:17:48 my cluster is named strkube-scanner
10:18:03 no, i mean only change the random id part
10:18:13 we should still use the cluster name as the prefix
10:18:23 we need something random
10:18:39 we can discuss offline
10:18:58 ricolin: there are two things
10:19:56 ricolin: one is the name issue; the other is the SD dependency, which I think is not a problem in the end. I think it was an issue because of the naming bug
10:20:34 I'll propose something in the storyboard; we can take it there
10:21:02 That's it from me; last week was upgrades and storyboard
10:21:03 * ricolin using my phone so can't really type fast :)
10:21:35 flwang1: go next?
10:21:42 sure
10:21:52 i'm working on many things recently
10:22:05 I saw, that is awesome
10:22:11 thanks :)
10:22:34 1. the keystone k8s integration; lingxian has done a great job to support a configmap for k8s-keystone-auth
10:23:00 with that, it will be much easier to integrate; i will add more patch sets later
10:23:49 2. I'm still working on the FEK stuff, to support logging, but i may hold it a bit due to limited bandwidth recently
10:24:12 3. the health_status and health_status_reason attributes
10:24:32 it's going well, and Ricardo left some great questions
10:24:55 i will discuss it with strigazi and ricardo
10:25:20 #link https://review.openstack.org/#/c/570818/
10:25:20 4. deprecate send_cluster_metrics, it's ready for review
10:25:53 5. fix the race condition issue when creating multiple masters: https://review.openstack.org/573639
10:26:02 it's ready for review; i have tested it locally and it works fine
10:26:35 6. make the etcd lb optional, see https://review.openstack.org/574540; it's still a work in progress
10:26:45 that's all from my side, thanks
10:27:12 thanks flwang1
10:27:37 brtknr: want to bring up something about cgroups / docker-ce?
10:29:32 let's move to open discussion then
10:29:38 #topic Open Discussion
10:29:50 Again, the meeting time
10:31:01 i'd like to discuss the naming convention for those lovely scripts: https://review.openstack.org/562454
10:31:21 Recently a US company showed some interest and started to contribute, and they are trying to get involved. Should we alternate bi-weekly?
10:31:27 Sorry I was AFK briefly
10:31:58 strigazi: i would prefer to avoid bi-weekly alternation
10:32:08 because based on my experience, it doesn't work well
10:32:22 how can we get them though? Two weekly meetings?
10:32:26 people are always confused
10:32:42 they can pop up anytime actually
10:32:47 it's an IRC channel
10:33:20 what's their time based on our current meeting time?
10:33:21 yes, but we are not always online :)
10:33:37 do you know their tz?
10:33:40 california
10:33:58 strigazi: 3:33 am in california right now
10:34:48 ok, in that case, we can try bi-weekly alternation
10:35:02 or two meetings?
10:35:12 i don't mind 2 meetings actually
10:35:38 i don't mind 2 meetings either, i'm probably only going to come to the one appropriate to my timezone
10:35:43 with two meetings, no one will show up and there won't be a meeting
10:36:22 we can do 1600 or 1700 UTC for them
10:36:34 flwang1: where are you now?
10:36:46 brtknr: NZ
10:36:49 even 1800 UTC
10:37:00 so it's 10pm there?
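
A sketch of the naming scheme settled on above (10:18:03 to 10:18:23): keep the cluster name as a readable prefix but regenerate the short random suffix on replacement, so the k8s cloud provider never sees two nodes with the same name and Heat can do its update-replace. The names and the helper are illustrative, not Magnum's actual implementation:

    import random
    import string

    def node_name(cluster_name, role, index):
        # A fresh random suffix per replacement keeps names unique for the
        # k8s cloud provider while the cluster-name prefix stays readable.
        suffix = "".join(
            random.choices(string.ascii_lowercase + string.digits, k=8))
        return f"{cluster_name}-{role}-{index}-{suffix}"

    print(node_name("strkube-scanner", "master", 0))
    # e.g. strkube-scanner-master-0-x3k9q2vm
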
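On item 1 of flwang1's list (10:22:34), the k8s-keystone-auth webhook speaks the Kubernetes TokenReview protocol, which the configmap-based policy work builds on. A hedged sketch of exercising such a webhook by hand; the endpoint URL and port are assumptions, not Magnum defaults:

    import requests

    # Hypothetical webhook endpoint; adjust to the actual deployment.
    WEBHOOK = "https://k8s-master.example.com:8443/webhook"

    review = {
        "apiVersion": "authentication.k8s.io/v1beta1",
        "kind": "TokenReview",
        "spec": {"token": "<a-keystone-token>"},
    }
    # The webhook validates the Keystone token and reports back whether
    # the request is authenticated, plus user info on success.
    resp = requests.post(WEBHOOK, json=review, verify=False)
    print(resp.json()["status"].get("authenticated"))
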
10:37:09 10:37PM
10:37:28 wow, true dedication!
10:37:50 +1 to 1700 UTC
10:38:05 brtknr: no worries, strigazi will buy me a beer
10:38:13 flwang1: or many
10:38:20 in Berlin
10:38:30 strigazi: ok, logged
10:38:37 Berlin!!!
10:38:38 flwang1: 1700 UTC for you?
10:38:48 strigazi: that works for me
10:39:00 flwang1: that's 5am!
10:39:16 how about sunday?
10:39:26 brtknr: it's true, i generally get up at 5 am
10:39:39 Ok. I can send an email and ask them
10:39:55 what's the company called?
10:39:58 in the ML
10:40:05 Blizzard
10:40:08 brtknr: it doesn't work haha, i need to have time with the kids
10:40:35 strigazi: ah, Blizzard
10:40:37 wow! cool! World of Warcraft Blizzard?
10:40:44 wow
10:40:58 brtknr: yes, they are using openstack a lot
10:41:00 Yes
10:41:20 i was joking about sunday
10:41:33 brtknr: i know, no worries
10:42:10 maybe friday 5am is best?
10:42:35 Friday sounds good
10:42:41 we can do thursday or friday for flwang1
10:42:48 end of week, enough gap from tuesday
10:42:54 yep
10:43:17 let's see, I'll send an email
10:43:31 brtknr: want to discuss anything?
10:43:43 i'm still waiting for review
10:43:53 on the two things
10:44:07 thanks for the earlier comments
10:44:18 brtknr: your patches are on my list, but i need to find a time slot, sorry about that
10:44:21 flwang1: regarding the script names, I'll check the patch; standards are good. We can make them SDs as well
10:44:28 flwang1: since we are touching them
10:45:10 brtknr: do you have time to add docker-ce? is it on your list?
10:45:24 strigazi: docker-ce is on my list
10:45:40 brtknr: we can do containerd too
10:46:01 i'll read up on containerd
10:46:01 brtknr: What we need is to set up builds for the containers
10:46:11 strigazi: thanks, please see my comments in the renaming patch when you have time
10:46:17 at creation?
10:46:31 rather than pulling the image from a source?
10:46:37 brtknr: no, just a CI
10:46:49 brtknr: to populate the source
10:47:09 strigazi: who will host the source?
10:47:24 I'll see with the infra team how to set it up
10:47:27 strigazi: cern? docker.io?
10:47:37 brtknr: now we have docker.io
10:47:48 strigazi: is cern using a private registry?
10:47:49 we can also use quay.io, push to both
10:47:55 flwang1: yes
10:48:03 what is it?
10:48:10 gitlab
10:48:29 it implements the docker registry v2 API
10:48:31 cern's registry is super fast compared to docker.io
10:49:10 brtknr: yes, but we can't rely on it as an OpenStack project
10:49:32 we're going to use Harbor
10:49:34 strigazi: i was just making an observation!
10:49:36 from vmware
10:49:57 brtknr: let's set up the CI and we can push anywhere we want
10:50:10 sounds good
10:50:45 flwang1: is it docker registry v2?
10:51:15 IIRC, yes
10:51:19 i will double check
10:51:52 ok, last thing about the registry: should we have a repo for the container image builds?
10:52:06 i'm probably going to be learning as i go along, so i'll have lots of questions
10:52:31 flwang1: you think it is complicated?
10:52:36 could you elaborate on the question?
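
A quick way to double-check the "is it registry v2?" question from 10:50:45: per the Docker Registry HTTP API V2 spec, a compliant registry answers GET /v2/ with 200 (or 401 when auth is required) and sets the Docker-Distribution-Api-Version header. A small sketch:

    import requests

    def is_registry_v2(host):
        # 200 means accessible, 401 means auth required; either way a
        # v2 registry identifies itself via this version header.
        resp = requests.get(f"https://{host}/v2/")
        return (resp.status_code in (200, 401) and
                resp.headers.get("Docker-Distribution-Api-Version")
                == "registry/2.0")

    print(is_registry_v2("registry-1.docker.io"))  # expected: True
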
10:52:41 strigazi: it would be nice to have a repo
10:52:50 brtknr: have a new repo
10:52:54 or we can hold it in the magnum repo initially
10:52:58 openstack/magnum-containers
10:53:01 strigazi: it's not complicated
10:53:08 cool
10:53:11 oh okay, i see what you mean
10:53:25 let's start in o/m and we'll see
10:53:40 You can build a periodic job to build it
10:53:43 it would be nice if we could have a CI to publish images automatically
10:53:55 flwang1: that is the goal
10:53:55 yeah, i don't see any immediate need for separate repos
10:54:03 brtknr: +1
10:54:17 strigazi: although i was wondering if the fedora atomic elements are still necessary
10:54:28 strigazi: for diskimage-builder
10:54:42 strigazi: since the image works out of the box now
10:54:45 brtknr: they are not
10:55:35 strigazi: should we drop them?
10:56:16 brtknr: sure
10:56:42 we can add them back if someone needs them
10:57:05 when i got started with magnum, it was quite a distraction, as i started building an fa27 image using the elements and later realised it wasn't required at all
10:57:21 anything else for the meeting?
10:57:30 brtknr: I have spent quite some time with it
10:57:38 brtknr: it wasn't funny
10:57:46 are you talking about this? https://github.com/openstack/magnum/tree/master/magnum/drivers/common/image
10:58:03 flwang1: yes, /fedora-atomic, not the agent
10:58:09 ok, got it
10:58:12 we don't need it
10:58:37 we may need better tagging for the heat-container-agent btw
10:59:52 nothing from my side
10:59:58 okay great
11:00:07 flwang1: we can iterate when we have builds
11:00:15 strigazi: cool
11:00:25 cool, thanks guys
11:00:32 #endmeeting
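
A sketch of what the periodic image-build job discussed at 10:53:40 could do, also addressing the heat-container-agent tagging point at 10:58:37: build once, tag with an explicit version instead of :latest, and push to both registries. The registry org names and the tag are assumptions, not the project's actual values:

    import subprocess

    IMAGE = "heat-container-agent"
    VERSION = "rawhide-20180612"        # hypothetical explicit tag
    REGISTRIES = [
        "docker.io/openstackmagnum",    # assumed org names; adjust as needed
        "quay.io/openstackmagnum",
    ]

    # Build once, then tag and push the same image to every registry.
    subprocess.run(["docker", "build", "-t", IMAGE, "."], check=True)
    for registry in REGISTRIES:
        target = f"{registry}/{IMAGE}:{VERSION}"
        subprocess.run(["docker", "tag", IMAGE, target], check=True)
        subprocess.run(["docker", "push", target], check=True)
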