14:00:07 <melwitt> #startmeeting nova
14:00:07 <openstack> Meeting started Thu Feb 22 14:00:07 2018 UTC and is due to finish in 60 minutes.  The chair is melwitt. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:08 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:10 <openstack> The meeting name has been set to 'nova'
14:00:14 <mriedem> o/
14:00:15 <hrw> o/
14:00:15 <tssurya> o/
14:00:22 <takashin> o/
14:00:23 <melwitt> howdy everyone
14:00:24 <efried> @/
14:00:31 <edleafe> \o
14:00:38 <bauzas> howdy ho
14:00:39 <edmondsw> o/
14:00:48 <melwitt> #topic Release News
14:00:57 <melwitt> #link Queens release schedule: https://wiki.openstack.org/wiki/Nova/Queens_Release_Schedule
14:01:04 <melwitt> #info Queens RC2 was released on 2018-02-15: https://review.openstack.org/#/c/545200
14:01:28 <melwitt> we're doing an RC3 today, and today is the last day for RCs. everything we need has merged as of this morning to stable/queens
14:01:36 <melwitt> #link Queens RC3 TODOs: https://etherpad.openstack.org/p/nova-queens-release-candidate-todo
14:01:36 <mriedem> nope
14:01:43 <mriedem> https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens+label:Code-Review=2
14:01:45 <melwitt> wat? I just checked
14:02:04 <melwitt> oh, dangit. I guess I had my search query wrong
14:02:05 <melwitt> sorry
14:02:17 <melwitt> okay, two more things left. then we can propose the release tag for RC3
14:02:44 <melwitt> #link Rocky release schedule: https://wiki.openstack.org/wiki/Nova/Rocky_Release_Schedule
14:02:59 <melwitt> the PTG is next week, so we will be kicking off the rocky cycle then
14:03:22 <melwitt> any comments or questions on release news?
14:03:38 <melwitt> #topic Bugs (stuck/critical)
14:03:57 <melwitt> #link queens-rc-potential bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=queens-rc-potential
14:04:07 <melwitt> none open anymore because everything has merged to master
14:04:32 <melwitt> Critical bugs ... none open
14:04:43 <melwitt> #info 26 new untriaged bugs (down 6 since the last meeting)
14:04:46 <bauzas> nope
14:04:48 <bauzas> 22 :p
14:04:52 <melwitt> as of last night when I made the query
14:04:55 <melwitt> I just woke up
14:04:58 <bauzas> heh
14:05:07 <bauzas> I should amend that before the meeting
14:05:18 <melwitt> untriaged count has been going down so thanks everyone for helping with that
14:05:27 <melwitt> #link bug triage how-to: https://wiki.openstack.org/wiki/Nova/BugTriage#Tags
14:05:52 <melwitt> how to triage, for any newer folks who'd like to help ^
14:05:58 <melwitt> #link untagged untriaged bugs: https://bugs.launchpad.net/nova/+bugs?field.tag=-*&field.status%3Alist=NEW
14:06:10 <bauzas> well, also setting the new bugs could be nice :)
14:06:36 <johnthetubaguy> what do you mean by "setting the new bugs"?
14:06:36 <bauzas> I saw some of them tagged, but not having been set :)
14:06:36 <melwitt> what do you mean by setting?
14:06:48 <bauzas> either confirming or closing
14:06:51 <melwitt> oh yes. tagging is the first step
14:06:57 <bauzas> or marking invalid or asking for more info
14:07:05 <bauzas> anyway
14:07:06 <johnthetubaguy> ah...
14:07:15 <melwitt> only 4 untriaged untagged bugs, which is great. thanks all for tagging bugs with a category
14:07:16 <bauzas> I'll try to have 0 new bugs within the next few weeks
14:07:29 <melwitt> #help tag untagged untriaged bugs with appropriate tags to categorize them
14:07:35 <melwitt> #help consider signing up as a bug tag owner and help determine the validity and severity of bugs with your tag
14:07:44 <mriedem> one thing here,
14:07:46 <mriedem> on bugs,
14:08:04 <mriedem> we used to have the bug fix patch list or whatever, for not super hard, pretty straight-forward fixes,
14:08:07 <mriedem> etherpad,
14:08:29 <mriedem> are we going to do that again? because triage is fine, but you know, we should actually fix these damn things
14:08:33 <efried> is "low hanging fruit" the phrase you're looking for?
14:08:38 <mriedem> sort of,
14:08:43 <mriedem> not all fixes are created equal
14:08:49 <melwitt> yeah, I've been thinking about it. it was the "trivial patch monkey" list
14:09:04 <mriedem> sure, something with a less offensive name
14:09:10 <efried> Would be a good section for the Etherpad Of Mel
14:09:11 <bauzas> right
14:09:13 <melwitt> it's still on the priorities etherpad, so it would be a matter of refreshing that and re-sending a mail to the ML to explain it again
14:09:24 * efried says in his best Gandalf voice
14:09:41 <mriedem> alright
14:09:41 <melwitt> #link review priorities etherpad https://etherpad.openstack.org/p/rocky-nova-priorities-tracking
14:09:42 <edleafe> "Trivial Patch Kitten"?
14:09:50 <mriedem> that's better
14:09:53 <melwitt> haha
14:10:12 <melwitt> yeah, agreed the old name wasn't great
14:10:16 <efried> ...list of "ball o' yarn" patches.
14:10:41 <melwitt> okay. so I'll get that fixed up on the etherpad and send a mail out about it with instructions and all
14:10:42 <bauzas> we have a Gerrit call AFAIK
14:10:49 <bauzas> I mean a Gerrit query
14:10:56 <bauzas> for trivial patches
14:11:08 <bauzas> but trivial means something different
14:11:11 <bauzas> small
14:11:34 <melwitt> yeah. there are a few gerrit dashboard queries linked at the bottom of the priorities etherpad that are helpful
14:11:57 <melwitt> anything else on bugs?
14:12:25 <melwitt> er, wait, gate status is in here too. sorry
14:12:31 <melwitt> #link check queue gate status http://status.openstack.org/elastic-recheck/index.html
14:12:44 <melwitt> gate status has been pretty smooth AFAICT
14:12:52 <melwitt> #link 3rd party CI status http://ci-watch.tintri.com/project?project=nova&time=7+days
14:12:56 <jaypipes> melwitt: sshhh, don't jinx it.
14:13:10 <melwitt> yeah, good point. don't listen to what I said, gate
14:13:12 <mriedem> grenade was broken on all branches last night
14:13:17 <mriedem> but fixes are merged
14:13:32 <melwitt> yeah, grenade on stable broke because of branchless tempestness
14:13:51 <melwitt> mnaser was on it quickly though and proposed fixes for that
14:14:00 <melwitt> those are merged yesterday night
14:14:08 <jaypipes> all hail mnaser.
14:14:23 <melwitt> yes, shout out to mnaser, thanks
14:14:27 <mriedem> lasterday?
14:14:38 <melwitt> yeah, heh
14:14:48 <jaypipes> #thanks mnaser for being on top of the tempest gate fixes!
14:14:48 <hrw> speaking of 3rd party CI: we (Linaro) are working on adding aarch64 boxes to openstack infra
14:15:05 <jaypipes> #thankyou mnaser for being on top of the tempest gate fixes!
14:15:15 <melwitt> hm, maybe the bot isn't in here
14:15:20 <jaypipes> damn it, what is the incantation...
14:15:37 <melwitt> I thought it was just #thanks
14:15:51 <melwitt> hrw: sounds cool, thanks for the heads up
14:16:24 <melwitt> I've noticed hyper-v, zVM and quobyte CIs seem to be failing pretty often
14:16:29 <mnaser> :)
14:17:10 <melwitt> so that's a TODO to look into those
14:17:22 <melwitt> and notify owners thusly
14:17:32 <melwitt> #topic Reminders
14:17:39 <melwitt> #info Rocky PTG is next week, there will be no Nova meeting next week
14:17:46 <melwitt> #link Rocky PTG topics: https://etherpad.openstack.org/p/nova-ptg-rocky
14:18:08 <melwitt> I've updated the schedule/agenda on ^ with times so it should be pretty detailed now
14:18:47 <melwitt> any questions about the PTG?
14:19:06 <melwitt> #topic Stable branch status
14:19:15 <melwitt> #link stable/queens: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens,n,z
14:19:25 <melwitt> #info working on Queens RC3: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/queens+label:Code-Review=2
14:19:32 <melwitt> as mentioned earlier ^
14:19:37 <lyarwood> stable/ocata looks blocked by a legacy-tempest-dsvm-multinode-live-migration failure, just getting around to looking at this now
14:20:00 <melwitt> lyarwood: a-ha, cool. thanks for looking into that
14:20:14 <melwitt> #link stable/pike: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/pike,n,z
14:20:42 <melwitt> things on pike need to be rechecked because of the grenade job issue from yesterday
14:20:51 <melwitt> #link stable/ocata: https://review.openstack.org/#/q/status:open+project:openstack/nova+branch:stable/ocata,n,z
14:20:56 <melwitt> #info ocata is blocked; need to remove grenade jobs: https://review.openstack.org/#/q/status:open+topic:neutron-grenade-ocata
14:21:11 <mriedem> branch eol + grenade + zuulv3 is turning out to be a new headache
14:21:12 <melwitt> oh, there ^ (thanks for adding that)
14:22:08 <melwitt> okay, anything else on stable branch status?
14:22:19 <melwitt> #topic Subteam Highlights
14:22:37 <melwitt> mriedem: cells v2?
14:22:48 <mriedem> there was a meeting, you and i were there,
14:22:54 <mriedem> tssurya has patches up for bug fixes
14:22:59 <mriedem> cern has a spec or 5
14:23:10 <mriedem> more info to come at the ptg
14:23:26 <melwitt> yep, thanks
14:23:29 <melwitt> edleafe: scheduler?
14:23:32 <edleafe> Started to discuss which placement effort should be prioritized for Rocky (NUMA affinity, aggregates, update_provider_tree, etc).
14:23:35 <edleafe> Decided that if keystone is down, returning a 503 with a clear explanation was the best response.
14:23:38 <edleafe> No meeting next week (PTG), and need someone to run the following week, as edleafe will be in a metal tube then.
14:23:41 <edleafe> that's it
14:23:56 <melwitt> cool, thanks
14:24:05 <jaypipes> edleafe: efried and I can run it.
14:24:18 <efried> I'll be out the week after the PTG
14:24:19 <edleafe> jaypipes: cool, I'll let you two fight over it
14:24:25 <jaypipes> ok, I'll run it :)
14:24:32 <efried> Wife and I sticking around Dublin
14:24:34 <edleafe> guess it'll be an easy fight for jaypipes to win
14:24:59 <melwitt> I'll be out the week after the PTG too, gibi will run the nova meeting on Mar 8
14:25:05 * jaypipes lines up all controversial decisions for Monday after next...
14:25:23 <melwitt> lol, there ya go
14:25:33 <melwitt> and speaking of, gibi left a note on the notifications subteam
14:25:38 <melwitt> Notification (gibi) - no meetings until after the PTG
14:25:50 <melwitt> anything else on subteam news?
14:26:03 <melwitt> #topic Stuck Reviews
14:26:23 <melwitt> one item on the agenda: Move Contrail VIF plugging to privsep broke Contrail TAP plugging https://bugs.launchpad.net/nova/+bug/1742963
14:26:23 <openstack> Launchpad bug 1742963 in OpenStack Compute (nova) queens "Cannot boot VM with Contrail SDN controller" [High,Fix committed] - Assigned to Matt Riedemann (mriedem)
14:26:31 <mriedem> that's fixed
14:26:33 <jaypipes> yep
14:26:43 <jaypipes> merged last night
14:26:53 <melwitt> yep, coolness
14:26:54 <melwitt> mail thread http://lists.openstack.org/pipermail/openstack-dev/2018-February/127601.html
14:27:11 <melwitt> does anyone else have any stuck reviews?
14:27:36 <melwitt> #topic Open discussion
14:27:41 <melwitt> one item on the agenda
14:27:43 <jaypipes> in doing the review on that one, I checked and made sure that was the only vif driver (still in nova) that was passing an Instance object. it was. all others pass component pieces so they're good.
14:27:44 * hrw /me
14:27:47 <Rambo> Hello everyone, can you help me review the spec about rebuilding an instance booted from volume?
14:27:47 <Rambo> The link is https://review.openstack.org/#/c/532407/
14:27:47 <Rambo> Thank you very much.
14:27:47 <Rambo> Any suggestion is welcome.
14:28:03 <melwitt> jaypipes: a-ha, cool. thanks
14:28:32 <melwitt> okay, so first hrw has a specless bp to highlight https://blueprints.launchpad.net/nova/+spec/configure-amount-of-pcie-ports
14:28:38 <hrw> o yes
14:28:48 <hrw> x86/q35 and aarch64/virt thing
14:29:04 * jaypipes has already reviewd the code.
14:29:28 <hrw> in short: we want to have hotplug on pcie machines. so we add more slots to have hotplug working.
14:30:04 <hrw> contrary to x86/i440fx, where slots just exist, on pcie machines they need to be added into the VM first
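For context, a minimal sketch of the idea hrw describes, assuming lxml and a libvirt guest XML string (the helper name is illustrative, not the actual nova libvirt driver code): on q35 and aarch64 "virt" machine types, devices can only be hotplugged into a free pcie-root-port, so spare ports have to be declared in the domain XML before the guest boots.

    # Illustrative only -- not the actual nova libvirt driver code.
    # q35/aarch64-virt guests can only hotplug PCIe devices into free
    # pcie-root-port controllers, so spare ports are added up front.
    from lxml import etree

    def add_spare_pcie_root_ports(domain_xml, num_ports):
        """Append num_ports empty pcie-root-port controllers to the guest."""
        root = etree.fromstring(domain_xml)
        devices = root.find('devices')
        for _ in range(num_ports):
            controller = etree.SubElement(devices, 'controller')
            controller.set('type', 'pci')
            controller.set('model', 'pcie-root-port')
        return etree.tostring(root, pretty_print=True).decode()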
14:30:16 <bauzas> I reviewed the code
14:30:20 <jaypipes> hrw: last back and forth between us, you said "I will do some testing and then rewrite whole comment section."
14:30:26 <bauzas> and I'm fine with a specless BP unless someone objects
14:30:28 <jaypipes> hrw: but I don't see you have done that yet?
14:30:39 <hrw> jaypipes: had some other stuff to do first
14:30:54 <jaypipes> hrw: no prob. was just checking to see if maybe you just hadn't pushed.
14:31:02 <Rambo> I am really interested in this, can you give me some advice? Thank you very much.
14:31:04 <hrw> jaypipes: probably will do that during ptg
14:31:08 <bauzas> hrw's question for that meeting was whether we should ask for a spec or not
14:31:16 <jaypipes> hrw: rock on.
14:31:25 <melwitt> Rambo: your item is next
14:31:26 <bauzas> not about any specific implementation detail
14:31:45 <mriedem> this doesn't seem spec worthy
14:31:45 <hrw> bauzas: thanks for using proper wording
14:31:51 <bauzas> melwitt: so, you agree on not requiring a spec ?
14:31:53 <mriedem> it's feature parity for the libvirt driver on aarch64 yeah?
14:32:03 <hrw> mriedem: aarch64 and x86/q35
14:32:07 <bauzas> yeah
14:32:13 <mriedem> q35 the infiniti model?
14:32:15 <mriedem> :P
14:32:21 <hrw> mriedem: close ;D
14:32:26 <bauzas> given it's about providing feature parity, I asked at least for a blueprint
14:32:38 <hrw> https://blueprints.launchpad.net/nova/+spec/configure-amount-of-pcie-ports is started
14:32:48 <melwitt> yeah, it's nice to have the blueprint to track the change
14:32:48 <hrw> pending approval still
14:32:49 <mriedem> specless bp seems fine
14:32:53 <bauzas> +1
14:33:00 <melwitt> okay, sounds cool, so we'll approve that blueprint
14:33:13 <hrw> there are some workitems there and links with info
14:33:37 <melwitt> great, thanks hrw
14:33:42 <jaypipes> "Now nova rebuild the volume-backed instance as make it a fast failure in the API." <-- I'm not actually sure what that means.
14:33:46 <hrw> not my first BP :D
14:33:53 <jaypipes> Rambo: ^^
14:34:03 <mriedem> jaypipes: we made a change in queens to fail in the api if you're rebuilding a volume-backed instance,
14:34:17 <mriedem> jaypipes: whereas before (for all time), we'd just silently not rebuild the disk on the compute but tell you we did
14:34:31 <Rambo> yes ,as mriedem
14:34:48 <jaypipes> mriedem: how does that first sentence relate to the following sentence? "But we have been offering volume-backed instance for more than 4 years and our users love it."
14:35:01 <mriedem> their users love volume-backed instances,
14:35:04 <mriedem> and presumably rebuild
14:35:06 <mriedem> but those 2 don't mix
14:35:27 <mriedem> i assume their users didn't know they were doing rebuilds that didn't actually work
14:35:33 <jaypipes> what exactly is rebuild of a volume-backed server? :(
14:35:35 <melwitt> ah, I see
14:35:51 <mriedem> rebuild the volume-backed root disk with the new image
14:35:53 <mriedem> i guess
14:35:55 <mriedem> i should note,
14:36:07 <mriedem> well, need to look at something
14:36:34 <mriedem> ok yeah
14:36:42 <mriedem> the API only fails if you're doing volume-backed server rebuild + a new image
14:36:48 <mriedem> if the image doesn't change, you can do volume-backed rebuild
14:36:56 <melwitt> oh, interesting
14:37:05 <mriedem> the problem is on the compute we don't do a new root disk volume with the new image
14:37:21 <jaypipes> I still don't "get" what exactly a "rebuild" is when the server is BFV,.
14:37:29 <bauzas> so, maybe it's just a documentation issue ?
14:37:36 <mriedem> https://review.openstack.org/#/c/520660/
14:37:40 <mriedem> we updated the docs
14:37:52 <jaypipes> mriedem: if it gets a new root disk, it's not a "rebuild", now is it?
14:38:28 <mriedem> it's a rebuild b/c you can hold onto your ports and volumes
14:38:33 <mriedem> but yes you can change the image
14:38:39 <mriedem> at least for ephemeral backed root disk
14:38:49 <jaypipes> ohffs
14:39:09 <mriedem> https://developer.openstack.org/api-ref/compute/#rebuild-server-rebuild-action
14:39:22 <bauzas> yeah so you would be turning your BFV instance into a normal instance, right?
14:39:24 <mriedem> there is a note in the api ref
14:39:27 <bauzas> excellent
14:39:29 <bauzas> ta
14:39:33 <mriedem> bauzas: no
14:39:36 <mriedem> we don't drop the volume
14:39:43 <bauzas> a-ha
14:40:00 <bauzas> but the volume is no longer attached then ?
14:40:25 <bauzas> FWIW, the api doc LGTM
14:40:35 <mriedem> the volume is attached
14:40:47 <mriedem> it's just that compute doesn't change the image in the root disk backed by the volume
14:40:52 <mriedem> i.e. we don't build a new volume for the root disk
14:40:55 <bauzas> got it, just read the bug report
14:41:17 <mriedem> so this would require, i think,
14:41:25 <mriedem> nova creating a new image-backed volume (like bfv today),
14:41:38 <mriedem> then replace the root bdm with that new one, before calling spawn
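Roughly, the ordering being described, sketched here with hypothetical helper names; none of this exists in nova as of this discussion and the signatures are illustrative only:

    # Hypothetical sketch of the ordering only; these helpers and
    # signatures are illustrative, not real nova/cinder APIs.
    def rebuild_volume_backed_root(context, instance, new_image_id,
                                   volume_api, bdms):
        root_bdm = bdms.root_bdm()              # hypothetical accessor
        old_volume_id = root_bdm.volume_id
        # 1. Create a fresh image-backed volume, the same way
        #    boot-from-volume does on initial create.
        new_volume = volume_api.create_from_image(
            context, size=root_bdm.volume_size, image_id=new_image_id)
        # 2. Swap the root block device mapping over to the new volume
        #    (detach/delete of the old volume and error handling omitted).
        root_bdm.volume_id = new_volume['id']
        root_bdm.save()
        # 3. Only then hand the updated BDMs to the virt driver's spawn.
        return bdms, old_volume_id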
14:41:56 <lyarwood> or a new cinder api to recreate the volume?
14:42:18 <mriedem> lyarwood: i don't know what cinder has today that can do that, or if we'd even want that
14:42:26 <mriedem> lyarwood: you're asking for rebuild in cinder i think :)
14:42:26 <bauzas> call me stupid, but why couldn't we assume that if we rebuild using a glance image, then the instance will no longer be volume-backed?
14:42:30 <lyarwood> mriedem: I don't think it has anything, hence the "new" part
14:42:42 <lyarwood> mriedem: right
14:43:02 <bauzas> because I don't really like the idea of nova orchestrating a volume creation and a swap
14:43:05 <mriedem> bauzas: rebuild goes through _prep_block_devices just like normal create
14:43:54 <mriedem> anywho, this is why it hasn't been done
14:44:15 <hrw> uf. when I read your discussion I know that nova is not a project where I can go for core
14:44:50 <bauzas> mriedem: looks like it's a big ask then
14:45:00 * jaypipes is going to stay out of the volume-backed rebuild conversation, for his own sanity.
14:45:12 <bauzas> jaypipes: heh, you're so right
14:45:12 <mriedem> bauzas: maybe
14:45:34 <mriedem> there is something about shelve + bfv that is bonkers too
14:45:45 <mriedem> basically bfv was bolted on and very little else accounted for it
14:45:51 <mriedem> so now we have all of these gaps
14:46:04 * melwitt nods
14:46:08 <bauzas> yup, we have a shit list of instance actions that don't really map correctly with BFV instances
14:46:15 <mriedem> so we can either fill those gaps so we have a reasonable system where you can create a server and do things with it,
14:46:26 <mriedem> or we just ignore it and say "too complicated, don't want to think about it'
14:46:51 <mriedem> tbc, i'm not advocating for this
14:47:00 <bauzas> me neither
14:47:12 <bauzas> but we at least need to properly document the gaps
14:47:13 <mriedem> i just see lots of very old bugs for this same kind of thing and from a user pov, it has to be frustrating
14:47:21 <bauzas> before someone magically fixes all the bonkers
14:47:27 <mriedem> yeah it's doc'ed
14:47:30 <mriedem> as of queens
14:47:34 <bauzas> so
14:47:37 <mriedem> what 6 years after bfv was added ?
14:47:55 <mriedem> anyway, /me gets off the soapbox
14:48:02 <bauzas> now we know we are bad, probably discussing it at the PTG is the next step ?
14:48:12 <bauzas> ie. the 5 stages of grief
14:48:28 <mriedem> heh, acceptance?
14:48:33 * johnthetubaguy wonders about the cinder storage backend discussions again
14:48:45 <mriedem> johnthetubaguy: never going to happen
14:48:56 <johnthetubaguy> mriedem: I think you are right
14:49:01 <mriedem> bauzas: if someone wants to throw it on the etherpad , whatever, but i don't think Rambo is going to be at the PTG
14:49:34 <bauzas> I'm not particularly expert in that, but I can at least kick an item in the etherpad and leave the battle happen at the PTG
14:49:58 <bauzas> not expert in *BFV instances*
14:50:10 <mriedem> i am now, unfortunately
14:50:25 <mriedem> the wily bdm turducken
14:50:29 <bauzas> I'm all open to get more knowledge :)
14:50:35 <melwitt> okay, so we can discuss it at the PTG and then also be sure to add comments to the spec review https://review.openstack.org/#/c/532407
14:50:56 <mriedem> Rambo: you brought this up in the mailing list before right?
14:51:00 <bauzas> Rambo: do you confirm mriedem's assumption that you won't be able to attend the PTG ?
14:51:04 <Rambo> yes, I am a student in China, thanks for your understanding.
14:51:06 <Rambo> yes
14:51:20 <mriedem> yes i see it
14:51:21 <Rambo> about a month ago
14:51:21 <mriedem> no replies
14:52:25 <edmondsw> link?
14:52:27 <mriedem> actually it was mgagne that said their users love it
14:52:28 <mriedem> "I do agree that being able to rebuild a volume-backed instance would be a great addition. We have been offering volume-backed instance for more than 4 years and our users love it."
14:52:54 <hrw> one thing...
14:53:02 <mriedem> http://lists.openstack.org/pipermail/openstack-dev/2017-December/125223.html
14:53:05 <mriedem> edmondsw: ^
14:53:07 <edmondsw> tx
14:53:13 <mriedem> so it sounds like mgagne's users would like it
14:53:17 <hrw> https://bugs.launchpad.net/python-novaclient/+bug/1743964
14:53:17 <openstack> Launchpad bug 1743964 in python-novaclient "server add floating ip fails with "AttributeError: add_floating_ip"" [Undecided,New]
14:53:36 <mriedem> hrw: known issue
14:53:45 <hrw> floating IP support got removed in pike/queens?
14:53:50 <mriedem> novaclient 10.0 dropped those python API bindings
14:53:52 <mriedem> queens
14:53:55 <mriedem> from novaclient
14:54:08 <melwitt> need to use openstackclient
14:54:10 <hrw> ok
14:54:16 <mriedem> i'll triage it
14:54:20 <mriedem> openstackclient fails
14:54:22 <mriedem> is the problem
14:54:23 <hrw> thanks
14:54:24 <bauzas> melwitt: do you plan to amend the etherpad or do you want me to do so ?
14:54:26 <mriedem> b/c osc uses novaclient python API bindings
14:54:41 <bauzas> melwitt: for the BFV rebuild thing
14:54:41 <melwitt> bauzas: it would be cool if you can add it
14:54:42 <mriedem> fwiw this is why novaclient 10.0 isn't in u-c for queens
14:54:49 <melwitt> oh
14:54:50 <bauzas> melwitt: roger.
14:54:59 <hrw> so for now workaround is novaclient 9.x, right?
14:55:01 <mriedem> https://bugs.launchpad.net/python-openstackclient/+bug/1745795
14:55:01 <openstack> Launchpad bug 1745795 in python-openstackclient ""openstack server remove floating ip" broken with python-novaclient 10.0.0" [Undecided,New]
14:55:09 <mriedem> hrw: yes per upper-constraints versions for queens
14:55:12 <melwitt> bauzas: thanks
14:55:12 <bauzas> hrw: workaround is Nova CLI :)
14:55:20 * bauzas jk
14:55:22 <mriedem> https://github.com/openstack/requirements/blob/stable/queens/upper-constraints.txt
14:55:29 <mriedem> hrw: no it's not
14:55:31 <mriedem> gdi,
14:55:33 <mriedem> people
14:55:39 <mriedem> *bauzas i mean
14:55:45 <mriedem> novaclient removed floating ip CLIs and API bindings,
14:55:49 <mriedem> osc uses novaclient API bindings
14:55:50 <mriedem> which are gone
14:55:54 <mriedem> so you can't use osc + novaclient >= 10.0
14:55:55 <mriedem> fin
14:56:04 <hrw> thx
14:56:11 <mriedem> see https://bugs.launchpad.net/python-openstackclient/+bug/1745795
14:56:11 <openstack> Launchpad bug 1745795 in python-openstackclient ""openstack server remove floating ip" broken with python-novaclient 10.0.0" [Undecided,New]
14:56:20 <mriedem> i sent a thing to the ML about this,
14:56:23 <mriedem> not sure how to fix osc
14:56:27 <mriedem> if you're not using neutron anyway
14:56:55 <mriedem> #link http://lists.openstack.org/pipermail/openstack-dev/2018-January/126741.html
14:57:36 <mriedem> if you want to add a floating ip,
14:57:40 <mriedem> you assign it to the port that the instance is using
14:58:06 <mriedem> ala what i do in this demo https://www.youtube.com/watch?v=hZg6wqxdEHk
14:58:13 * mriedem shamelessly plugs
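A minimal sketch of that port-based workaround with openstacksdk (assuming a clouds.yaml entry named 'mycloud', an external network named 'public', and a server named 'my-server'; all names are placeholders):

    # Attach a floating IP via the instance's neutron port, instead of
    # the removed novaclient floating-ip binding. Names are placeholders.
    import openstack

    conn = openstack.connect(cloud='mycloud')

    server = conn.compute.find_server('my-server')
    port = next(conn.network.ports(device_id=server.id))   # first bound port
    public_net = conn.network.find_network('public')

    fip = conn.network.create_ip(floating_network_id=public_net.id)
    conn.network.update_ip(fip, port_id=port.id)
    print('Attached %s to port %s' % (fip.floating_ip_address, port.id))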
14:58:32 <melwitt> cool, thanks for all that info
14:58:57 <melwitt> Rambo: we'll discuss your spec at the PTG and then add comments to the spec review, thanks
14:59:11 <melwitt> one minute left, anything else?
14:59:21 <Rambo> Thank you very much, thanks
14:59:37 <melwitt> okay cool, thanks everyone
14:59:41 <melwitt> #endmeeting