14:00:29 <tellesnobrega> #startmeeting sahara
14:00:30 <openstack> Meeting started Thu Nov 23 14:00:29 2017 UTC and is due to finish in 60 minutes.  The chair is tellesnobrega. Information about MeetBot at http://wiki.debian.org/MeetBot.
14:00:32 <openstack> Useful Commands: #action #agreed #help #info #idea #link #topic #startvote.
14:00:34 <openstack> The meeting name has been set to 'sahara'
14:00:51 <jeremyfreudberg> o/
14:01:57 <tellesnobrega> waiting for show of hands
14:01:58 <tosky> o/
14:02:44 * tosky did not expect jeremyfreudberg around
14:02:51 <tosky> jeremyfreudberg: happy thanksgiving!
14:03:03 * tellesnobrega didn't expect him either
14:03:07 <tellesnobrega> happy thanksgiving
14:03:36 <jeremyfreudberg> tosky, tellesnobrega, thanks! it's still early here so i can squeeze it in before festivities
14:03:41 <tellesnobrega> jeremyfreudberg is probably working up an appetite for a big turkey later
14:03:47 <jeremyfreudberg> indeed
14:04:01 <tellesnobrega> let's start
14:04:05 <tellesnobrega> #topic News/Updates
14:04:45 <tosky> sooo, the status of the zuul v3 migration is a bit stuck, but not bad
14:05:32 <tosky> there is a relevant change that I'm waiting for before merging the sahara-tests jobs (if you are interested: zuul._projects renamed to zuul.projects, and it is now a dict instead of a list)
14:06:13 <tosky> the other blockers are a) lack of support for multinode devstack jobs (infra hasn't worked on it yet)
14:06:17 <tosky> b) missing grenade job (but the old one can be used)
14:07:11 <tosky> c) ongoing refactoring of the way jobs publish their results (I should ask again); it blocks the refactoring of the sahara-extra jobs (and the change to publish oozie)
14:07:14 <tosky> and that's it for now
14:08:40 <tellesnobrega> I'm currently working on two main things: MapR image generation and decommission of a specific node. On the first one I'm hitting an issue installing openjdk-8; it fails on the check for some reason. The second is not as simple as expected: I need to update the heat stack created by us (services.heat.ClusterStack), and I will probably need to add an option to pass a list of instances, since right now it just receives the number of instances in each node group
14:09:28 <jeremyfreudberg> not much new to report from me, I haven't had time to revise the S3 job binary patch yet or fix the thing where s3_hadoop in SIE is breaking stuff
14:09:31 <jeremyfreudberg> I'll get to it soon...
14:09:38 <jeremyfreudberg> tellesnobrega, can you explain the openjdk thing?
14:09:38 <tellesnobrega> SotK, the decommission on the heat side always deletes the last instance created
14:10:47 <tellesnobrega> sure, basically I'm trying to deploy openjdk with package: java-1.8.0-openjdk-devel
14:11:08 <tellesnobrega> just like I did for CDH, but it fails during the rpm -q java-1.8.0-openjdk-devel
14:11:25 <tellesnobrega> I'm not sure why. The error that shows up is RuntimeError: sh:
14:11:29 <tellesnobrega> that is it :(
14:11:40 <tellesnobrega> SotK, I'm trying to install with a script
14:11:41 <tosky> wut
14:12:12 <tosky> that's the time when I start putting print() around if I'm too lazy to go with a debugger
14:13:11 <tellesnobrega> I did, I even started a python shell and launched guestfs with the image to test it
14:13:55 <tellesnobrega> rpm -q really fails, but it installs with yum -y install, and after that rpm -q works. I'm not sure why it fails the first time
14:14:25 <jeremyfreudberg> do other rpm commands work? (other options besides -q, -q with other packages) maybe your image is corrupted or something? not much to go on
14:14:48 <tellesnobrega> jeremyfreudberg, I do install a lot of packages
14:15:26 <tellesnobrega> they all work the same way: it checks whether the package is installed with rpm -q and, if it is not installed, it runs yum -y install
14:15:45 <jeremyfreudberg> very strange
14:16:33 <tellesnobrega> there are other packages that don't install, but I took them out for now. I'm working on the most important ones first to see if that works, and then I will put the others back
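A minimal sketch of the check-then-install pattern described above, using the libguestfs Python bindings (the same route tellesnobrega mentions testing from a python shell); the image path and package name are placeholders, not the actual MapR image spec:

    import guestfs

    image = "mapr.qcow2"                  # placeholder image path
    package = "java-1.8.0-openjdk-devel"

    g = guestfs.GuestFS(python_return_dict=True)
    g.add_drive_opts(image, format="qcow2", readonly=0)
    g.set_network(True)                   # yum needs network access inside the appliance
    g.launch()
    root = g.inspect_os()[0]
    g.mount(root, "/")

    try:
        # g.sh() raises RuntimeError when the command exits non-zero,
        # which is what "rpm -q <pkg>" does when the package is absent.
        g.sh("rpm -q %s" % package)
        print("%s already installed" % package)
    except RuntimeError:
        g.sh("yum -y install %s" % package)

    g.shutdown()
    g.close()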
14:17:33 <tellesnobrega> Do any of you have experience with heat stacks?
14:18:24 <jeremyfreudberg> tellesnobrega, a little bit.
14:18:53 <tellesnobrega> jeremyfreudberg, does what I said about adding an instance list make sense?
14:19:06 <tosky> I wonder how tripleo manages the addition/removal of nodes
14:19:55 <tellesnobrega> tosky, hmm, I could ask around to see how they do that, we can work the same logic
14:22:02 <jeremyfreudberg> tellesnobrega, i think it makes sense
14:22:35 <jeremyfreudberg> btw, tellesnobrega, I might be looking in the wrong place, but for your image packing issue, doesn't stderr get dropped (only stdout is captured)? https://github.com/openstack/sahara/blob/7bf5ed49bb2a16bf36b9fc54fa78bc28f5d85ffb/sahara/cli/image_pack/api.py#L88
14:22:55 <jeremyfreudberg> i might be in the wrong part of the code there
14:23:45 <tellesnobrega> I have to make sure that the template change won't break the way cluster stack works now
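One Heat feature worth checking for the "decommission a specific node" problem: OS::Heat::ResourceGroup takes a removal_policies property naming which members to drop when the count shrinks, instead of always dropping the last one created. A rough sketch of what a generated template fragment could look like if ClusterStack passed an instance list through; the parameter and resource names here are illustrative, not sahara's actual template keys:

    # Illustrative fragment of a generated HOT template, expressed as a
    # Python dict; "instances_to_remove" is a hypothetical parameter.
    node_group = {
        "type": "OS::Heat::ResourceGroup",
        "properties": {
            "count": {"get_param": "instance_count"},
            # When the count is reduced, Heat removes the members named
            # here first rather than the most recently created ones.
            "removal_policies": [
                {"resource_list": {"get_param": "instances_to_remove"}}
            ],
            "resource_def": {
                "type": "OS::Nova::Server",
                "properties": {
                    "name": "node-%index%",
                    # flavor, image, networks, etc. omitted
                },
            },
        },
    }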
14:24:58 <tellesnobrega> jeremyfreudberg, that is only for getting the proper output message. It goes into the exception handler there and it should raise with the message, but it is empty. But yeah, I can try adding stderr there to see if anything else comes up
14:25:17 <tellesnobrega> when I did it manually the message was the same, RuntimeError: sh:
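A quick, hedged way to check jeremyfreudberg's stderr theory from the same guestfs handle as above: fold stderr into stdout and neutralize the exit status inside the shell command, so g.sh() returns whatever the check actually printed instead of raising with an empty "sh:" message:

    # Debugging sketch only: the trailing echo keeps the exit status zero,
    # so g.sh() returns the combined output instead of raising.
    out = g.sh("rpm -q java-1.8.0-openjdk-devel 2>&1; echo exit=$?")
    print(out)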
14:27:22 <tellesnobrega> oh well, I will keep digging into those two. tosky, thanks for the tripleo idea, I will take a look
14:29:13 <tellesnobrega> #topic Open Discussion
14:30:08 <tosky> there are a few reviews for the stable branches: one is an old backport for ocata, two are new fixes for tox_install.sh
14:30:21 <tellesnobrega> I guess I asked all my questions during news/updates
14:30:32 <tellesnobrega> tosky, I will take care of them
14:31:09 <jeremyfreudberg> I might have mentioned this before, but there is SAHARA_AUTO_IP_ALLOCATION_ENABLED in the dashboard config; it exists because of some nova-network detail, *however* it also has an accidental neutron use case
14:31:10 <jeremyfreudberg> namely,
14:31:29 <jeremyfreudberg> that if you have use_floating_ips=False it's a very easy way to hide that field from the user in the dashboard
14:31:49 <jeremyfreudberg> I'm reminded of this because of the recent patch to remove use_neutron
14:32:03 <shuyingya> Sorry for being late. I am on a business trip these two weeks
14:32:19 <jeremyfreudberg> My question is, does it seem like a legitimate use case? And is it ok to keep the misleading name?
14:32:24 <jeremyfreudberg> hi shuyingya
14:32:50 <shuyingya> Hi. Sorry for the interruption
14:33:07 <shuyingya> Please go on
14:34:25 <jeremyfreudberg> thanks, i was basically done with my point. But to put it succinctly again: the SAHARA_AUTO_IP_ALLOCATION_ENABLED dashboard config is for nova-network, but it is actually kind of useful for neutron too, in an accidental way. Is that ok / should we rename the parameter?
14:35:20 <tellesnobrega> jeremyfreudberg, the best way to go would be to make the property name clearer
14:35:51 <tellesnobrega> misleading property names are always a source of trouble
14:36:36 <tellesnobrega> do you have a better name suggestion?
14:37:06 <tosky> at most we can still use SAHARA_AUTO_IP_ALLOCATION_ENABLED, if set, to set the new variable, and then remove it in some future version
14:37:29 <jeremyfreudberg> tosky, right, I guess it's not as atomic an option as I paint it to be
14:37:31 <shuyingya> I think this name doesn’t relate to nova or neutron
14:37:47 <shuyingya> It is ok to still use it
14:38:10 <jeremyfreudberg> shuyingya, my problem was that "Auto IP" is a nova-network concept, I think
14:38:13 <jeremyfreudberg> but I could be wrong
14:38:40 <tellesnobrega> not sure either
14:38:42 <jeremyfreudberg> this is the official help text for the option, btw, we should at least update that https://github.com/openstack/horizon/blob/9adb63643778a779c571b4898b315b582bf8fba8/openstack_dashboard/local/local_settings.py.example#L791
14:38:51 <jeremyfreudberg> doc update as well
14:39:54 <jeremyfreudberg> or actually, I guess it's a bit confusing because "Auto IP" can mean Sahara auto ip (`use_floating_ips`), or Nova-net `auto_assign_floating_ip`
14:40:12 <tellesnobrega> it is a bit confusing
14:40:46 <tosky> I agree with a transition to less confusing names
14:40:57 <tosky> or we will have the same talk every 6 months :)
14:41:14 <tellesnobrega> true
14:41:24 <shuyingya> I am confused too. If it is related to a nova or neutron concept, renaming it would be helpful
14:41:43 <jeremyfreudberg> so, I propose to keep this config option alive, but rename it to something like SAHARA_FLOATING_IPS_ENABLED
14:42:15 <tellesnobrega> sounds good
14:42:36 <jeremyfreudberg> and we will follow tosky's advice: deprecate the old option but don't remove it quite yet
14:42:47 <tellesnobrega> yeah
14:42:57 <tosky> yep
14:43:01 <jeremyfreudberg> great
14:43:07 <tellesnobrega> thanks jeremyfreudberg
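A hedged sketch of how the dashboard side could honor both names during the deprecation window tosky suggests; SAHARA_FLOATING_IPS_ENABLED is only the proposed new name, and the exact place sahara-dashboard would read it from is assumed, not checked:

    from django.conf import settings

    def floating_ips_enabled():
        # Proposed new name takes precedence.
        if hasattr(settings, 'SAHARA_FLOATING_IPS_ENABLED'):
            return settings.SAHARA_FLOATING_IPS_ENABLED
        # Old nova-network-era name, kept working but flagged as deprecated.
        if hasattr(settings, 'SAHARA_AUTO_IP_ALLOCATION_ENABLED'):
            import warnings
            warnings.warn(
                "SAHARA_AUTO_IP_ALLOCATION_ENABLED is deprecated; use "
                "SAHARA_FLOATING_IPS_ENABLED instead", DeprecationWarning)
            return settings.SAHARA_AUTO_IP_ALLOCATION_ENABLED
        return True  # assumed default; the real default was not discussed here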
14:43:24 <jeremyfreudberg> shuyingya, do you have any update?
14:43:29 <shuyingya> I would like to give an update on my recent work
14:43:31 <shuyingya> Yep
14:45:02 <shuyingya> I am on a business trip to headquarters to work on deploying the sahara service in containers
14:45:38 <tellesnobrega> cool
14:45:39 <shuyingya> Maybe I can update the sahara charts in the openstack-helm project
14:45:47 <tellesnobrega> how is that going?
14:46:20 <shuyingya> First, build the sahara service image with the Lola project
14:47:06 <shuyingya> And then use the openstack-helm project to form the helm template for kubernetes helm
14:48:08 <tosky> I'm not following the container stuff too closely; I know that TripleO uses the containers from Kolla, are openstack-helm and Lola different?
14:51:08 <tosky> shuyingya: maybe you missed my last message: how are openstack-helm and Lola related to Kolla, whose images are used by TripleO ?
14:51:11 <tellesnobrega> shuyingya, did you see tosky's question?
14:51:23 <shuyingya> I haven't investigated the TripleO project yet, but it seems to be implemented the same way
14:51:26 <shuyingya> Yes
14:52:05 <shuyingya> I am on my way back to the hotel
14:52:11 <tellesnobrega> shuyingya, cool. It seems like it is about the same thing done in two projects
14:52:39 <shuyingya> Sorry. I will share the details next week
14:52:47 <tellesnobrega> no worries, thanks shuyingya
14:52:52 <jeremyfreudberg> thanks shuyingya
14:53:05 <shuyingya> You are welcome
14:54:24 <tellesnobrega> we are 6 minutes from the end; do we have any other discussion topics for today?
14:54:34 <jeremyfreudberg> yes
14:54:37 <jeremyfreudberg> one more (quick?) thing
14:54:45 <jeremyfreudberg> about apiv2, actually
14:54:46 <tellesnobrega> go ahead
14:54:52 <shuyingya> There are too many exciting things I want to share with you
14:55:10 <shuyingya> We can share them next week
14:55:24 <shuyingya> :)
14:55:28 <tellesnobrega> :)
14:55:36 <jeremyfreudberg> v1 and v1.1 had the project id in the endpoint URL in the catalog, but v2 does not
14:55:42 <jeremyfreudberg> so i don't think we can use the same service type
14:55:55 <jeremyfreudberg> this is for service discovery, I mean
14:56:18 <tellesnobrega> hm
14:57:11 <jeremyfreudberg> not sure what best practice is here
14:57:17 <tosky> I don't really know; it requires some investigation (whether other projects did it)
14:57:24 <jeremyfreudberg> with a smart enough client anything is possible
14:57:41 <shuyingya> I would like to investigate nova's implementation first
14:58:12 <tellesnobrega> shuyingya, do you want to investigate that and update us next week?
14:58:30 <shuyingya> Sure
14:58:33 <jeremyfreudberg> mistral and cinder both got new workflowv2 and volumev2 service types at some point; we should find out what their motivation for that was
14:58:51 <d0ugal> Mistral copied cinder :)
14:59:09 <jeremyfreudberg> d0ugal, thx
14:59:15 <tellesnobrega> thanks d0ugal, let's find out cinder's motivation
14:59:17 <d0ugal> but it was a mistake and shouldn't be copied by anyone else AFAIK
14:59:18 <shuyingya> Thanks
14:59:36 <tellesnobrega> even better information d0ugal, thanks
14:59:44 <jeremyfreudberg> d0ugal, thx, again
14:59:50 <tosky> d0ugal: so v3 is still volumev2? I should know that, but I forgot
14:59:55 <tellesnobrega> we can check keystone v3 to see if they changed anything
15:00:14 <tosky> keystone is probably the special case that I wouldn't look at
15:00:18 <d0ugal> tosky: I'm not sure - I am not that familiar with cinder. I work on Mistral.
15:00:20 <jeremyfreudberg> yep
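One option worth including in the investigation (a sketch under assumptions, not a decision from this meeting): keystoneauth's version discovery can select the v2 endpoint under the existing data-processing service type, provided the API serves version documents at its root; the auth values below are placeholders:

    from keystoneauth1 import adapter, loading, session

    loader = loading.get_plugin_loader('password')
    auth = loader.load_from_options(
        auth_url='http://controller:5000/v3',   # placeholder
        username='demo', password='secret', project_name='demo',
        user_domain_id='default', project_domain_id='default')
    sess = session.Session(auth=auth)

    # Keep the existing "data-processing" service type and let the client
    # negotiate v2 from the version document, instead of registering a
    # second service type in the catalog.
    sahara = adapter.Adapter(
        session=sess, service_type='data-processing', interface='public',
        min_version='2.0', max_version='2.latest')

    print(sahara.get('/clusters').status_code)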
15:00:27 <jeremyfreudberg> btw, our meeting time is over
15:00:53 <tellesnobrega> it is
15:01:16 <tellesnobrega> thanks all
15:01:33 <tellesnobrega> see you all next week
15:01:46 <tellesnobrega> #endmeeting