16:01:09 #startmeeting interopwg
16:01:09 Meeting started Wed Sep 26 16:01:09 2018 UTC and is due to finish in 60 minutes. The chair is eglute. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:10 Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:13 The meeting name has been set to 'interopwg'
16:01:26 #chair hogepodge
16:01:27 Current chairs: eglute hogepodge
16:01:31 #topic agenda
16:01:44 #link https://etherpad.openstack.org/p/InteropWhistler.19
16:01:47 Hello Everyone!
16:02:40 Let me know if you are here for the interop meeting!
16:03:08 hogepodge are you around?
16:03:45 o/
16:03:53 Hello markvoelker!
16:03:56 #chair markvoelker
16:03:57 Current chairs: eglute hogepodge markvoelker
16:04:07 looks like it is just the two of us here today
16:04:43 =) I say we assign all the action items to hogepodge and call it a day then. =p
16:05:02 Sounds great to me!
16:05:15 #action hogepodge will do all the things
16:05:18 :D
16:05:36 thanks for submitting the nova update
16:05:56 i left a comment, i worry it would impact public clouds
16:06:26 Hm, I'm curious about that. The capability isn't backend-specific, so disabling it would be a policy decision, right?
16:06:48 #link https://review.openstack.org/#/c/601638/ Add compute-servers-create-multiple as advisory
16:07:22 yes, most likely policy? wonder if public clouds allow it
16:07:28 maybe a non-issue
16:07:35 hi!
16:07:56 hello Chris!
16:07:56 Do you know of public clouds that don't?
16:08:18 markvoelker i do not. i need to test it on rackspace cloud to see if they allow it
16:09:07 Ok, that'd be good...I can try to poke at a few public clouds I have accounts on too, but I seem to recall using this with them before.
16:09:32 ah ok. i will check and update
16:10:12 hogepodge you mentioned using compute-servers-create-multiple fairly regularly, right? Mostly on a private setup though?
16:10:23 yeah, I use it in my ironic lab
16:10:30 ah, ok
16:11:04 I also use terraform a lot, but I don't know if it does it as a batch or as a loop
16:11:45 So since eglute raised a concern about whether public clouds support this, I suggest taking a look around at a few and maybe pinging a few contacts we have. Worst case, it'll be advisory and operators will shout then, so we can strike it before it becomes required.
16:12:04 sounds good to me
16:13:32 ok, since there are at least 3 of us here, would either markvoelker or hogepodge give a PTG summary?
16:13:41 sure
16:14:01 #link https://etherpad.openstack.org/p/InteropRefstackPTGDenver_2018 Denver PTG etherpad
16:14:24 Most of the morning we used as a working session to clear our backlog of reviews and discuss a few outstanding items
16:14:43 As a result, we merged several patches and submitted a few more follow-up patches, which have also mostly merged
16:14:59 thanks for that!
16:15:00 These included some Cinder stuff, the most notable being the deprecation of the v2 API from the required list
16:15:25 We also decided to target the November BoD meeting for the next guideline approval
16:15:43 sounds good
16:16:00 Once we decide on the compute-servers-create-multiple patch I'll likely get a patch ready to create the formal document
16:16:14 thanks markvoelker
16:16:56 Later in the day we also chatted with Georg about NFV stuff, and I chatted again with Ildiko over breakfast the next day
16:17:33 Neither Georg nor I have had as much time as we'd have liked to work on an NFV program, and we're not currently seeing a lot of noise about it from the community side either
16:17:53 ok. any other takeaways from the NFV discussions?
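[Editor's note: the compute-servers-create-multiple capability discussed above refers to Nova's ability to create several servers in one API call via the `min_count`/`max_count` fields of the server-create request, rather than looping over single creates. The sketch below only builds the request body; the image and flavor IDs are placeholders, and actually submitting it would need a real compute endpoint and auth token.]

```python
def build_create_multiple_body(name, image_ref, flavor_ref, count):
    """Build the JSON body for a batched POST /v2.1/servers request.

    With min_count == max_count, Nova either creates exactly `count`
    servers or fails the whole request.
    """
    return {
        "server": {
            "name": name,           # Nova derives per-instance names from this
            "imageRef": image_ref,  # placeholder image UUID
            "flavorRef": flavor_ref,  # placeholder flavor ID
            "min_count": count,
            "max_count": count,
        }
    }

body = build_create_multiple_body("web", "IMAGE_UUID", "FLAVOR_ID", 3)
# This body would then be POSTed to <compute-endpoint>/servers with a
# valid X-Auth-Token header; a cloud that disables the capability by
# policy would reject counts greater than 1.
```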
16:18:32 We are seeing, for example, a lot of telco requirements that have a lot of similarities and could be simplified if we could check a box stating compliance with an interop program, so it feels like there's still some demand out there
16:18:59 We discussed that maybe a formal interop program isn't the way to get started though
16:19:50 Perhaps, for example, we might do some sort of event (imagine, for example, a sort of interop challenge where the workload was a VNF like Clearwater or some such) to drum up interest, then see who's willing to put some further work in
16:20:10 ok, so pause the work on a formal NFV program for now?
16:20:19 i do like the idea of an event
16:20:30 How to go about this would likely be a discussion we need to have with the Foundation folks and/or BoD, so we may want to talk about it at the November meeting
16:21:03 ok, that sounds good
16:21:05 thank you
16:21:12 We also chatted a bit about some different stakeholders that we haven't spent much time with so far...for example, VNF vendors
16:21:22 anything else from the PTG?
16:22:01 I think that's about it...we also chatted with gema a bit about ARM and some questions around image interop
16:22:39 what were the image interop questions?
16:23:14 can you call clouds interoperable if they can't boot the same image (arm vs x86)?
16:23:24 Just about whether images themselves ought to be part of an interop test...e.g. if I have an x86 cloud, the cirros image I boot on it won't work on my ARM cloud
16:23:38 But the APIs to upload it, boot a server from it, etc. do work
16:23:48 Came down to a question of the API vs the data you feed to it
16:23:58 right, that makes sense
16:24:02 my opinion is that any api that takes bad data will barf. arm image to x86 cloud is bad data
16:24:17 ++
16:24:37 how many clouds are using arm? any idea?
16:24:48 Two as far as I know so far.
16:24:52 Linaro and Vexxhost
16:24:54 Linaro and vexxhost
16:25:00 *jinx
16:25:07 Aww, now I can't speak
16:25:14 =p
16:25:22 hahah
16:25:25 It's ok, it's IRC and you can still type
16:25:39 #link http://wondermark.com/378/ jinx
16:26:05 so how about VMs? they would face the same issue right, arm vs x86?
16:27:05 Not sure I follow?
16:27:25 kvm vs vmware vs xen, for example
16:27:41 true... i guess we don't address that right now anyways
16:27:52 We kind of covered that too. Discovery of expected image types is cloud specific, but there's no API to expose that implementation afaik
16:28:13 So it's up to the operator to provide that information.
16:28:17 Oh, ok....right. Yeah, same deal: the APIs work, but you can feed any API parameters that don't make sense and things break.
16:28:26 right
16:28:39 We actually mentioned things like image conversion that work for the hypervisor case, fwiw. Not so much for the processor architecture case.
16:29:06 But the main point was that the APIs are available and work and the tempest tests pass.
16:29:16 Regarding architecture: images are supposed to set architecture, and the nova scheduler does the right thing. When folks upload images without architecture hints, they are scheduled on the default architecture for the cloud. Accidents should only happen on misconfigured clouds. Interop probably benefits from setting architecture on images to avoid the problem.
16:29:31 This is an instance where clouds may behave differently. Some clouds do conversion in the background.
16:30:14 Do tests exist in tempest to exercise that capability persia?
16:31:30 hogepodge: That I don't know. Sadly, somewhere between Grizzly and Queens, we lost track of architecture, and all the docs stopped recommending setting it: I don't know if the tempest multiarch tests degraded. For Rocky we now have the concept of a "default" architecture for a given cloud to work around that bug, but I don't know that any tempest tests were added to test that bit.
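[Editor's note: the scheduling rule persia describes can be sketched as a fallback: an image's `architecture` property wins when set, otherwise the cloud's configured default architecture is assumed. The helpers below are illustrative only (not actual nova code), with a hypothetical `cloud_default` parameter standing in for the per-cloud default.]

```python
def effective_architecture(image_properties, cloud_default="x86_64"):
    """Return the architecture an instance will be scheduled for.

    Images uploaded without an architecture hint fall back to the
    cloud's default architecture, per the Rocky-era workaround.
    """
    return image_properties.get("architecture", cloud_default)


def bootable_on(image_properties, host_arch, cloud_default="x86_64"):
    """An image can only boot where its effective arch matches the host."""
    return effective_architecture(image_properties, cloud_default) == host_arch

# An aarch64 image uploaded without the hint passes every API call on an
# x86 cloud but lands on hosts that can't run it -- the "bad data" case
# discussed above, visible only at boot time.
```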
16:31:32 i am doing migrations now from old openstack versions to newer ones, and in some cases have to rely on glance images working properly. however, i have not had to deal with different architectures. other than that, i must say that openstack is mostly interoperable between different openstack providers/deployers/operators as far as i have seen
16:34:11 i have not been setting architecture for images, i don't think. i am not even sure you can pass architecture as a parameter on image create
16:34:47 Note that it *is* possible to construct a UEFI image that can boot on multiple architectures, but I don't think glance supports this (other than by not specifying architecture).
16:35:07 eglute: You very much can pass architecture on image storage in glance: it's one of the core properties.
16:35:44 thanks persia i will need to look a little more into it
16:36:55 persia i am not seeing it in the api. is that part of glance.conf, or something else? https://developer.openstack.org/api-ref/image/v2/index.html#images
16:36:57 eglute: Feel free to ask me (probably in #openstack-dev) if you have questions about it.
16:37:05 thanks persia
16:37:35 eglute: https://docs.openstack.org/glance/latest/admin/useful-image-properties.html
16:38:06 On my queens cloud arch is required. I did have to set hypervisor type on flavors for baremetal. I wonder if defaults are x86 centric
16:38:18 meant to say arch is not required
16:38:34 thanks markvoelker
16:39:13 eglute: Look under "show detail" in GET /v2/schemas/images
16:39:40 So let's see....past that I think about the only thing open is the "make consistency job gating" patch that hogepodge was working on....
16:39:55 #link https://review.openstack.org/#/c/601633/ Make consistency job gating
16:40:07 thanks persia!! makes sense
16:40:20 hogepodge: just chugging along iterating on patchsets, looks like?
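[Editor's note: to illustrate persia's answer to eglute, `architecture` can be supplied as an extra property when creating an image through the Images v2 API (the CLI equivalent would be something like `openstack image create --property architecture=aarch64 ...`). The sketch below only assembles a plausible request body; the name and format values are placeholders.]

```python
def build_image_create_body(name, architecture):
    """Build a JSON body for POST /v2/images with an architecture hint.

    The Images v2 API treats "architecture" as one of the documented
    useful image properties; setting it lets the scheduler avoid
    placing the image on hosts of the wrong architecture.
    """
    return {
        "name": name,
        "disk_format": "qcow2",       # placeholder format
        "container_format": "bare",   # placeholder format
        "architecture": architecture,  # e.g. "x86_64" or "aarch64"
    }

body = build_image_create_body("cirros-arm", "aarch64")
# POST this to <image-endpoint>/v2/images with a valid auth token; the
# property is then visible in GET /v2/schemas/images under "show detail".
```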
16:40:37 thanks hogepodge for creating the patch
16:40:48 Once this lands (I just sent an update to address the issues this morning), we can add it to Tempest. Talked with gmann and they're ok with making it a job in their repo once it's running.
16:41:23 great, thanks hogepodge
16:41:49 hopefully I got my ansible/yaml variable syntax right
16:43:02 hogepodge do you think there will be keystone or swift updates for the next guideline?
16:43:31 I'm not expecting any, but I haven't given it the attention it deserves.
16:44:03 ok... i expect no major changes, but it would be good to check
16:44:55 anything else today?
16:45:05 markvoelker anything for heat?
16:45:52 Nope, not expecting anything. Chatted with Rico a bit in the hallway. Since we're not doing resource validation this cycle, there are not likely to be any changes.
16:46:07 great thank you
16:46:27 i think that is it then, unless someone has anything else for today
16:47:01 I should add: no changes in terms of new stuff being added, though we probably do need to promote some of the advisory stuff
16:47:17 I have an AI to do that but didn't get to it before dashing home for the hurricane
16:47:23 thanks
16:47:52 markvoelker hopefully no more hurricanes for you this year
16:48:30 thanks everyone!
16:48:32 #endmeeting