16:01:09 <eglute> #startmeeting interopwg
16:01:09 <openstack> Meeting started Wed Sep 26 16:01:09 2018 UTC and is due to finish in 60 minutes.  The chair is eglute. Information about MeetBot at http://wiki.debian.org/MeetBot.
16:01:10 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
16:01:13 <openstack> The meeting name has been set to 'interopwg'
16:01:26 <eglute> #chair hogepodge
16:01:27 <openstack> Current chairs: eglute hogepodge
16:01:31 <eglute> #topic agenda
16:01:44 <eglute> #link https://etherpad.openstack.org/p/InteropWhistler.19
16:01:47 <eglute> Hello Everyone!
16:02:40 <eglute> Let me know if you are here for interop meeting!
16:03:08 <eglute> hogepodge are you around?
16:03:45 <markvoelker> o/
16:03:53 <eglute> Hello markvoelker!
16:03:56 <eglute> #chair markvoelker
16:03:57 <openstack> Current chairs: eglute hogepodge markvoelker
16:04:07 <eglute> looks like it is just the two of us here today
16:04:43 <markvoelker> =)  I say we assign all the action items to hogepodge and call it a day then. =p
16:05:02 <eglute> Sounds great to me!
16:05:15 <eglute> #action hogepodge will do all the things
16:05:18 <eglute> :D
16:05:36 <eglute> thanks for submitting the nova update
16:05:56 <eglute> i left a comment, i worry it would impact public clouds
16:06:26 <markvoelker> Hm, I'm curious about that.  The capability isn't backend-specific so disabling it would be a policy decision, right?
16:06:48 <markvoelker> #link https://review.openstack.org/#/c/601638/ Add compute-servers-create-multiple as advisory
16:07:22 <eglute> yes, most likely policy? wonder if public clouds allow it
16:07:28 <eglute> maybe a non-issue
16:07:35 <hogepodge> hi!
16:07:56 <eglute> hello Chris!
16:07:56 <markvoelker> Do you know of public clouds that don't?
16:08:18 <eglute> markvoelker i do not. i need to test it on rackspace cloud to see if they allow it
16:09:07 <markvoelker> Ok, that'd be good...I can try to poke at a few public clouds I have accounts on too but seems like I recall using this before with them.
16:09:32 <eglute> ah ok. i will check and update
16:10:12 <markvoelker> hogepodge you mentioned using compute-servers-create-multiple fairly regularly, right?  Mostly on a private setup though?
16:10:23 <hogepodge> yeah, I use it in my ironic lab
16:10:30 <markvoelker> ah, ok
16:11:04 <hogepodge> I also use terraform a lot, but I don't know if they do it as a batch or as a loop
16:11:45 <markvoelker> So since eglute raised a concern about whether public clouds support this, I suggest taking a look around at a few and maybe pinging a few contacts we have.  Worst case, it'll be advisory and operators will shout then so we can strike it before it becomes required.
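[For context on the capability discussed above: "create multiple servers" is a single Nova `POST /servers` request carrying `min_count`/`max_count` fields, rather than a loop of individual boots. A minimal sketch of building that request body follows; the server name, image, and flavor IDs are placeholders, and whether a given cloud permits the call is the policy question raised here.]

```python
def build_create_multiple_request(name, image_ref, flavor_ref,
                                  min_count=1, max_count=None):
    """Build the JSON body for Nova's POST /servers call when booting
    several servers in one request (the compute-servers-create-multiple
    capability).  min_count and max_count are the standard Nova fields;
    all other values here are illustrative placeholders."""
    if max_count is None:
        max_count = min_count
    if min_count < 1 or max_count < min_count:
        raise ValueError("need 1 <= min_count <= max_count")
    return {
        "server": {
            "name": name,
            "imageRef": image_ref,
            "flavorRef": flavor_ref,
            "min_count": min_count,   # request fails unless at least this many can boot
            "max_count": max_count,   # never boot more than this many
        }
    }
```

[A public cloud that disables this would presumably do so via policy rather than a backend limitation, which matches the discussion above; a tool like Terraform may instead issue one request per server, so its behavior doesn't by itself confirm support.]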
16:12:04 <eglute> sounds good to me
16:13:32 <eglute> ok, since there are at least 3 of us here, would either markvoelker or hogepodge give a PTG summary?
16:13:41 <markvoelker> sure
16:14:01 <markvoelker> #link https://etherpad.openstack.org/p/InteropRefstackPTGDenver_2018 Denver PTG etherpad
16:14:24 <markvoelker> Most of the morning we used as a working session to clear our backlog of reviews and discuss a few outstanding items
16:14:43 <markvoelker> As a result, we merged several patches and submitted a few more follow-up patches which have also mostly merged
16:14:59 <eglute> thanks for that!
16:15:00 <markvoelker> These included some Cinder stuff, the most notable being the deprecation of the v2 API from the required list
16:15:25 <markvoelker> We also decided to target the November BoD meeting for the next guideline approval
16:15:43 <eglute> sounds good
16:16:00 <markvoelker> Once we decide on the compute-servers-create-multiple patch I'll likely get a patch ready to create the formal document
16:16:14 <eglute> thanks markvoelker
16:16:56 <markvoelker> Later in the day we also chatted with Georg about NFV stuff, and I chatted again with Ildiko over breakfast the next day
16:17:33 <markvoelker> Neither Georg nor I have had as much time as we'd have liked to work on an NFV program, and we're not currently seeing a lot of noise about it from the community side either
16:17:53 <eglute> ok. any other takeaways from NFV discussions?
16:18:32 <markvoelker> We are seeing, for example, a lot of telco requirements that have a lot of similarities and could be simplified if we could check a box stating compliance with an interop program, so feels like there's still some demand out there
16:18:59 <markvoelker> We discussed that maybe a formal interop program isn't the way to get started though
16:19:50 <markvoelker> Perhaps, for example, we might do some sort of event (imagine, for example, a sort of interop challenge where the workload was a VNF like Clearwater or some such) to drum up interest, then see who's willing to put some further work in
16:20:10 <eglute> ok, so pause on the work on formal NFV program for now?
16:20:19 <eglute> i do like the idea of an event
16:20:30 <markvoelker> How to go about this would likely be a discussion we need to have with the Foundation folks and/or BoD, so we may want to talk about it at the November meeting
16:21:03 <eglute> ok, that sounds good
16:21:05 <eglute> thank you
16:21:12 <markvoelker> We also chatted a bit about some different stakeholders that we haven't spent much time with so far...for example, VNF vendors
16:21:22 <eglute> anything else from the PTG?
16:22:01 <markvoelker> I think that's about it...we also chatted with gema a bit about ARM and some questions around image interop
16:22:39 <eglute> what were the image interop questions?
16:23:14 <hogepodge> can you call clouds interoperable if they can't boot the same image (arm vs x86)
16:23:24 <markvoelker> Just about whether images themselves ought to be part of an interop test...e.g. if I have an x86 cloud, the cirros image I boot on it won't work on my ARM cloud
16:23:38 <markvoelker> But the API to upload it, boot a server from it, etc do work
16:23:48 <markvoelker> Came down to a question of API vs the data you feed to it
16:23:58 <eglute> right, that makes sense
16:24:02 <hogepodge> my opinion is that any api that takes bad data will barf. arm image to x86 cloud is bad data
16:24:17 <markvoelker> ++
16:24:37 <eglute> how many clouds are using arm? any idea?
16:24:48 <hogepodge> Two as far as I know so far.
16:24:52 <hogepodge> Linaro and Vexxhost
16:24:54 <markvoelker> Linaro and vexxhost
16:25:00 <markvoelker> *jinx
16:25:07 <hogepodge> Aww, now I can't speak
16:25:14 <markvoelker> =p
16:25:22 <eglute> hahah
16:25:25 <markvoelker> It's ok, it's IRC and you can still type
16:25:39 <hogepodge> #link http://wondermark.com/378/ jinx
16:26:05 <eglute> so how about VMs? they would face the same issue right, arm vs x86?
16:27:05 <markvoelker> Not sure I follow?
16:27:25 <hogepodge> kvm vs vmware vs xen, for example
16:27:41 <eglute> true... i guess we don't address that right now anyways
16:27:52 <hogepodge> We kind of covered that too. Discovery of expected image types is cloud specific, but there's no API to expose that implementation afaik
16:28:13 <hogepodge> So it's up to the operator to provide that information.
16:28:17 <markvoelker> Oh, ok....right.  Yeah, same deal: the APIs work, but you can feed any API parameters that don't make sense and things break.
16:28:26 <eglute> right
16:28:39 <markvoelker> We actually mentioned things like image conversion that work for the hypervisor case, fwiw.  Not so much for the processor architecture case.
16:29:06 <markvoelker> But the main point was that the APIs are available and work and the tempest tests pass.
16:29:16 <persia> Regarding architecture: images are supposed to set architecture, and nova scheduler does the right thing.  When folk upload images without architecture hints, they are scheduled on the default architecture for the cloud.  Accidents should only happen for misconfigured clouds.  Interop probably benefits from setting architecture on images to avoid the problem.
16:29:31 <hogepodge> This is an instance where clouds may behave differently. Some clouds do conversion in the background.
16:30:14 <hogepodge> Do tests exist in tempest to exercise that capability persia?
16:31:30 <persia> hogepodge: That I don't know.  Sadly, somewhere between Grizzly and Queens, we lost track of architecture, and all the docs stopped recommending setting it: I don't know if the tempest multiarch tests degraded.  For Rocky we now have the concept of a "default" architecture for a given cloud to work around that bug, but I don't know that any tempest tests were added to test that bit.
16:31:32 <eglute> i am doing migrations now from old openstack versions to newer ones, and in some cases have to rely on glance images working properly. however, have not had to deal with different architectures. other than that, i must say that openstack is mostly interoperable between different openstack providers/deployers/operators as far as i have seen
16:34:11 <eglute> i have not been setting architecture for images i don't think. I am not even sure you can pass architecture as a parameter on image create
16:34:47 <persia> Note that it *is* possible to construct an UEFI image that can boot for multiple architectures, but I don't think glance supports this (other than by not specifying architecture).
16:35:07 <persia> eglute: You very much can pass architecture on image storage in glance: it's one of the core properties.
16:35:44 <eglute> thanks persia i will need to look a little more into it
16:36:55 <eglute> persia i am not seeing it in the api. is that part of glance.conf, or something else? https://developer.openstack.org/api-ref/image/v2/index.html#images
16:36:57 <persia> eglute: Feel free to ask me (probably in #openstack-dev) if you have questions about it.
16:37:05 <eglute> thanks persia
16:37:35 <markvoelker> eglute: https://docs.openstack.org/glance/latest/admin/useful-image-properties.html
16:38:06 <hogepodge> On my queens cloud arch is required. I did have to set hypervisor type on flavors for baremetal. I wonder if defaults are x86 centric
16:38:18 <hogepodge> meant to say arch is not required
16:38:34 <eglute> thanks markvoelker
16:39:13 <persia> eglute: Look under "show detail" in GET /v2/schemas/images
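[As a concrete illustration of the properties discussed above: `architecture` and `hypervisor_type` are among Glance's documented image properties that the Nova scheduler can use for placement. The helper below is a hypothetical sketch, and the set of architecture values is illustrative rather than the authoritative libosinfo list.]

```python
# Illustrative subset of architecture values; Glance's docs point to the
# libosinfo list for the full set.
KNOWN_ARCHITECTURES = {"x86_64", "aarch64", "ppc64", "armv7l", "i686"}

def image_properties(architecture=None, hypervisor_type=None):
    """Assemble the extra properties to set at image-create time so the
    scheduler can place servers booted from the image on matching hosts.
    Omitting 'architecture' falls back to the cloud's default architecture,
    which is the accident-prone case described above."""
    props = {}
    if architecture is not None:
        if architecture not in KNOWN_ARCHITECTURES:
            raise ValueError("unrecognized architecture: %s" % architecture)
        props["architecture"] = architecture
    if hypervisor_type is not None:
        props["hypervisor_type"] = hypervisor_type  # e.g. qemu, vmware, xen
    return props
```

[On the CLI this corresponds to passing `--property architecture=aarch64` (and similar) to `openstack image create`; the same keys appear under GET /v2/schemas/images as persia notes.]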
16:39:40 <markvoelker> So let's see....past that I think about the only thing open is "make consistency job gating" patch that hogepodge was working on....
16:39:55 <markvoelker> #link https://review.openstack.org/#/c/601633/ Make consistency job gating
16:40:07 <eglute> thanks persia!! makes sense
16:40:20 <markvoelker> hogepodge:  just chugging along iterating on patchsets, looks like?
16:40:37 <eglute> thanks hogepodge for creating the patch
16:40:48 <hogepodge> Once this lands (I just sent an update to address the issues this morning), we can add to Tempest. Talked with gmann and they're ok with making it a job in their repo once it's running.
16:41:23 <eglute> great, thanks hogepodge
16:41:49 <hogepodge> hopefully I got my ansible/yaml variable syntax right
16:43:02 <eglute> hogepodge do you think there will be keystone or swift updates for the next guideline?
16:43:31 <hogepodge> I'm not expecting any, but I haven't given it the attention it deserves.
16:44:03 <eglute> ok... i expect no major changes, but would be good to check
16:44:55 <markvoelker> anything else today?
16:45:05 <eglute> markvoelker anything for heat?
16:45:52 <markvoelker> Nope, not expecting anything.  Chatted with Rico a bit in the hallway.  Since we're not doing resource validation this cycle there's not likely to be any changes.
16:46:07 <eglute> great thank you
16:46:27 <eglute> i think that is it then unless someone has anything else for today
16:47:01 <markvoelker> I should add: no changes in terms of new stuff being added, though we probably do need to promote some of the advisory stuff
16:47:17 <markvoelker> I have an AI to do that but didn't get to it before dashing home for the hurricane
16:47:23 <eglute> thanks
16:47:52 <eglute> markvoelker hopefully no more hurricanes for you this year
16:48:30 <eglute> thanks everyone!
16:48:32 <eglute> #endmeeting