frickler | slaweq: I did a bit of local testing myself. IMO the amount of changes that would be needed in devstack shows what changes operators would have to go through, and to me this shows what a bad idea that PTG decision might be | 07:10 |
opendevreview | Jakob Meng proposed openstack/devstack master: (WIP) Hack to show bug on clean CentOS 8 Stream https://review.opendev.org/c/openstack/devstack/+/826813 | 07:41 |
slaweq | frickler: I don't think it's that bad. I have devstack deployed locally with those new settings already. The main change I had to do is basically a "revert" of changes made earlier in patch https://review.opendev.org/c/openstack/devstack/+/797450 as now (again) project admin needs to be used instead of system admin for most of those things | 07:51 |
slaweq | there are some other issues there, but not that huge | 07:51 |
slaweq | Today I should send a few patches to try it together in upstream ci | 07:51 |
*** bhagyashris_ is now known as bhagyashris | 08:01 | |
opendevreview | Martin Kopec proposed openstack/tempest master: Fix test_rebuild_server test by waiting for floating ip https://review.opendev.org/c/openstack/tempest/+/814085 | 08:02 |
*** jpena|off is now known as jpena | 08:13 | |
opendevreview | Slawek Kaplonski proposed openstack/devstack master: Fix deployment of Neutron with enforced scopes https://review.opendev.org/c/openstack/devstack/+/826851 | 08:53 |
opendevreview | Slawek Kaplonski proposed openstack/devstack master: Revert "Disable enforcing scopes in Neutron temporary" https://review.opendev.org/c/openstack/devstack/+/826852 | 08:53 |
*** bhagyashris_ is now known as bhagyashris | 08:53 | |
slaweq | frickler: ^^ let's wait for ci results there but I hope that will be enough to make it work again with scope enforcement in Neutron | 08:54 |
frickler | slaweq: ah, you kind of cheated and added a new cloud definition. that's exactly the point, doing that as a provider for thousands of customers is really a big thing. but for fixing devstack I think it is good enough for now | 09:52 |
frickler | kopecmartin: you were asking about newcomer tasks. one might be to clean up the use of "openstack ... | grep | get_field " and replace it with "openstack -f value -c COL", like seen e.g. here https://review.opendev.org/c/openstack/devstack/+/826851/1/lib/neutron_plugins/services/l3 | 10:21 |
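The cleanup frickler describes can be sketched roughly like this. The `openstack` function below is a stub standing in for the real CLI (the table output and router name are invented for illustration), and `get_field` is a simplified copy of the devstack helper from functions-common:

```shell
# Stub for the real `openstack` CLI so the comparison runs anywhere;
# the resource name and values here are illustrative, not real devstack data.
openstack() {
    case "$*" in
        *"-f value -c id"*) echo "abc-123" ;;
        *) printf '| id   | abc-123 |\n| name | router1 |\n' ;;
    esac
}

# Simplified copy of devstack's get_field helper: prints column N
# of pipe-delimited table output, one line at a time.
get_field() {
    local data field
    while read -r data; do
        field="\$$(($1 + 1))"
        echo "$data" | awk -F'[ \t]*\\|[ \t]*' "{print $field}"
    done
}

# Old pattern: grep the row, then awk out the column.
old=$(openstack router show router1 | grep " id " | get_field 2)

# New pattern: let the CLI select the column itself.
new=$(openstack router show router1 -f value -c id)

echo "$old $new"   # prints: abc-123 abc-123
```

Besides being shorter, the `-f value -c COL` form doesn't break when a value happens to contain the grepped-for string.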
kopecmartin | frickler: great, thanks, i'm gonna make a note of it in the priority etherpad | 10:23 |
kopecmartin | gmann: when you have a sec, can you please check this bug fix? https://review.opendev.org/c/openstack/tempest/+/814085 | 13:01 |
opendevreview | Merged openstack/tempest master: Revert "Check VM's console log before trying to SSH to it." https://review.opendev.org/c/openstack/tempest/+/817634 | 13:20 |
opendevreview | James Parker proposed openstack/whitebox-tempest-plugin master: Test resize with mem_page_size in flavor https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/824772 | 14:14 |
opendevreview | Dan Smith proposed openstack/grenade master: Add grenade-skip-level job https://review.opendev.org/c/openstack/grenade/+/826101 | 14:37 |
vkmc | o/ | 15:00 |
enriquetaso | o/ | 15:00 |
enriquetaso | \o | 15:00 |
vkmc | not sure how many people will join us today | 15:00 |
tbarron | \o/ | 15:00 |
carloss | o/ | 15:00 |
vkmc | we can try to start a meeting with the bot | 15:00 |
vkmc | !startmeeting devstack-cephadm | 15:00 |
opendevmeet | vkmc: Error: "startmeeting" is not a valid command. | 15:00 |
enriquetaso | oops | 15:00 |
vkmc | !start devstack-cephadm | 15:00 |
opendevmeet | vkmc: Error: "start" is not a valid command. | 15:00 |
vkmc | oh noes | 15:00 |
vkmc | #startmeeting devstack-cephadm | 15:01 |
opendevmeet | Meeting started Fri Jan 28 15:01:23 2022 UTC and is due to finish in 60 minutes. The chair is vkmc. Information about MeetBot at http://wiki.debian.org/MeetBot. | 15:01 |
opendevmeet | Useful Commands: #action #agreed #help #info #idea #link #topic #startvote. | 15:01 |
opendevmeet | The meeting name has been set to 'devstack_cephadm' | 15:01 |
fmount | o/ | 15:01 |
tosky | o/ | 15:01 |
vkmc | o/ | 15:02 |
vkmc | let's give people a couple of minutes to join | 15:02 |
enriquetaso | \o | 15:03 |
rosmaita | o/ | 15:03 |
carloss | o/ | 15:04 |
vkmc | #link https://etherpad.opendev.org/p/devstack-plugin-ceph | 15:04 |
vkmc | just set an etherpad to take notes, if needed | 15:04 |
vkmc | let's get started | 15:05 |
vkmc | #topic motivations | 15:05 |
vkmc | so, as mentioned in the mailing list thread | 15:05 |
*** fmount is now known as fpantano | 15:06 | |
vkmc | we (as in manila team) are aiming to implement a plugin for devstack to deploy a ceph cluster using the cephadm tool | 15:07 |
vkmc | this tool is developed and maintained by the ceph community | 15:07 |
vkmc | and it allows users to get specific ceph versions very easily and enforces good practices for ceph clusters | 15:08 |
fpantano | vkmc: yeah and it's a good idea as ceph-ansible is not used anymore (cephadm is the official deploy tool) | 15:08 |
sean-k-mooney | i would suggest pivoting the current plugin instead of developing a new one | 15:08 |
vkmc | agree :) | 15:08 |
fpantano | and there's also the orchestrator managing the ceph containers, so it's not just a matter of deploy | 15:09 |
sean-k-mooney | so that we only have to test with one way of deploying ceph in the gate | 15:09 |
tosky | because everyone else needs it, as this is the new way for deploying ceph | 15:09 |
tosky | and we don't need to change all the jobs | 15:09 |
vkmc | all good reasons | 15:09 |
vkmc | seems we don't need to get more into detail on why we want to do this | 15:09 |
fpantano | +1 | 15:09 |
vkmc | so, I guess, we are already covering this | 15:09 |
vkmc | but | 15:09 |
vkmc | #topic design approach | 15:10 |
vkmc | as sean-k-mooney mentions, instead of implementing two different plugins | 15:10 |
sean-k-mooney | i think for most projects how ceph is deployed is less relevant than that ceph is deployed. we do care about version to a degree but not necessarily whether it is in a container or distro package or upstream ceph package | 15:10 |
vkmc | we should work on devstack-plugin-ceph as we know it, allowing users to toggle between the two | 15:10 |
sean-k-mooney | +1 | 15:10 |
vkmc | and, as tosky mentions, it will be easier as well from a ci standpoint | 15:11 |
vkmc | so we don't need to change the jobs definitions | 15:11 |
tosky | so, do we need the switch? | 15:11 |
sean-k-mooney | i think until its stable yes | 15:11 |
vkmc | maybe, with time, we will need to come up with a strategy to turn on the option to deploy with cephadm, start testing with that | 15:12 |
sean-k-mooney | i would suggest that for this cycle we likely don't want to enable cephadm by default until we are happy it works for the majority of the jobs | 15:12 |
tosky | right | 15:12 |
vkmc | and slowly deprecate the current way of deploying things | 15:12 |
fpantano | agree | 15:12 |
sean-k-mooney | ya so similar to the ovn switch | 15:12 |
vkmc | which leads me to | 15:12 |
tosky | fpantano: so just to clarify: is ceph-ansible still useful for pacific? I'd say it is, as we use it | 15:12 |
vkmc | #topic initial patch | 15:12 |
tbarron | and devstack-plugin-ceph is branched now so old stable branches can continue as they do now | 15:12 |
fpantano | tosky: afaik it's useful only for upgrade purposes | 15:13 |
vkmc | #link https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 15:13 |
tosky | is there a specific mapping of what is compatible with what from the ceph, ceph-ansible and cephadm point of view? | 15:13 |
tbarron | tosky: we aren't really using ceph-ansible either with the old branches | 15:13 |
tbarron | the approach was even older | 15:13 |
fpantano | tbarron: ah, so ceph-ansible here is out of the picture already | 15:13 |
tosky | oh | 15:13 |
tbarron | i think so, someone correct me if I'm wrong | 15:13 |
tosky | tbarron: by "old branches" do you also include the current master? Now that I think about it, the code installs the packages and does some magic | 15:14 |
tbarron | leseb authored a lot of early ceph-ansible and early devstack-plugin-ceph but the latter was not really adapted to use ceph-ansible | 15:14 |
tbarron | even on master | 15:14 |
vkmc | yeah, we don't use ceph-ansible in the plugin | 15:14 |
vkmc | ceph-ansible is being used by other deployment tools, such as tripleo | 15:15 |
vkmc | but in the devstack-plugin-ceph we have been installing packages and configuring things manually | 15:15 |
tosky | ok, so is it correct to say that the current method used by devstack-plugin-ceph just works because it happened not to use ceph-ansible, but it's a handcrafted set of basic steps? | 15:15 |
sean-k-mooney | does cephadm still have a dep on docker/podman on the host. i have not used it in a while | 15:15 |
vkmc | sean-k-mooney, it does | 15:16 |
tosky | (sorry for the questions, just to have the full current status on the logs) | 15:16 |
tbarron | tosky: +1 (but check me folks) | 15:16 |
fpantano | sean-k-mooney: yes, it has podman dep | 15:16 |
sean-k-mooney | ack | 15:16 |
vkmc | tosky++ | 15:16 |
sean-k-mooney | we should have that available in the relevant distros i guess | 15:16 |
sean-k-mooney | but devstack wont set it up by default | 15:16 |
vkmc | right | 15:16 |
sean-k-mooney | so the plugin will have to check and enable it, although will cephadm do that for us? | 15:16 |
sean-k-mooney | the less we have to maintain the better | 15:17 |
vkmc | sean-k-mooney, https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484/8/devstack/lib/cephadm#100 | 15:17 |
vkmc | agree | 15:17 |
tosky | it should really be "install and run", at least it was on Debian when I switched it - you may just need to enable some parameters depending on the kernel version used, but that could be figured out I guess | 15:17 |
sean-k-mooney | ack | 15:17 |
sean-k-mooney | vkmc: i'm not sure how many others have this use case but will you support multi node ceph deployment as part of this work | 15:18 |
sean-k-mooney | we at least will need to copy the keys/ceph config in the zuul jobs to multiple nodes | 15:18 |
vkmc | sean-k-mooney, if we are currently testing that in ci, we will need to see how to do such a deployment with cephadm in our ci, yes | 15:19 |
sean-k-mooney | but do you plan to support adding additional nodes with osds via running devstack and the plugin on multiple nodes | 15:19 |
fpantano | sean-k-mooney: ack, the current review is standalone, but in theory, having 1 node cluster and ssh access to other nodes means you can enroll them as part of the cluster | 15:19 |
tbarron | sean-k-mooney: cephadm can do scale-out well | 15:19 |
sean-k-mooney | i think nova really only uses one ceph node (the controller) and copies the keys/config and installs the ceph client on the compute | 15:19 |
tbarron | sean-k-mooney: oh, you are talking multiple ceph clients rather than multi-node ceph cluster, right? | 15:20 |
sean-k-mooney | well potentially both | 15:20 |
sean-k-mooney | the former is used today | 15:20 |
tbarron | I don't see any issue either way. | 15:21 |
sean-k-mooney | the latter might be nice for some glance/cinder/grenade testing | 15:21 |
sean-k-mooney | sorry that is a bit off topic please continue | 15:21 |
fpantano | tbarron: me neither, let's have a plan (and maybe multiple reviews) to reach that status | 15:21 |
tbarron | fpantano: +1 | 15:21 |
vkmc | fpantano++ | 15:22 |
tosky | multiple ceph clients is probably the first step as it may be used in multinode jobs | 15:22 |
sean-k-mooney | s/may/will :) | 15:22 |
fpantano | +1 | 15:22 |
sean-k-mooney | although i think we do that via roles in zuul today | 15:22 |
sean-k-mooney | so it might not be fully in scope for the plugin but rather the job definitions/roles | 15:23 |
tbarron | and probably that part won't change, but as fpantano says, can look in future reviews | 15:23 |
sean-k-mooney | i believe how it works today is the first node completes, then we just copy the data to the others | 15:23 |
vkmc | so I take it this is a scenario needed by nova, right? | 15:23 |
sean-k-mooney | then stack on the remainder, so that should be pretty independent | 15:23 |
sean-k-mooney | the ceph multinode job does live/cold migration testing | 15:24 |
sean-k-mooney | so both computes need to be able to connect to the single ceph instance on the controller | 15:24 |
sean-k-mooney | that could be step 2 however since we can use the legacy install mode for multinode for now | 15:25 |
fpantano | yeah, I'd say let's have a single node working, then multinode is the goal and the CI can be tested against this scenario | 15:25 |
vkmc | fpantano++ | 15:25 |
vkmc | ok, so... in theory, cephadm is a tool that can be used to deploy a working single node cluster, and then we can scale as needed | 15:27 |
vkmc | and test different scenarios | 15:27 |
vkmc | depending on what we need | 15:27 |
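For reference, the single-node-then-scale flow discussed above looks roughly like the following with cephadm. This is an illustrative sketch, not the plugin's actual code: the hostnames and monitor IP are placeholders, and exact flags may vary between ceph releases (these commands need a real host, so they are shown as a transcript only):

```shell
# Bootstrap a minimal single-node cluster (first mon + mgr) on this host;
# 203.0.113.10 is a placeholder monitor IP.
cephadm bootstrap --mon-ip 203.0.113.10

# Later, scale out: give the cluster ssh access to a new host and enroll it...
ssh-copy-id -f -i /etc/ceph/ceph.pub root@host2
ceph orch host add host2

# ...and let the orchestrator place OSDs on any unused disks it finds.
ceph orch apply osd --all-available-devices
```

This matches fpantano's point above: a one-node cluster plus ssh access to other nodes is enough to enroll them later.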
vkmc | I'm not familiar with all the different scenarios we need to test, I guess this varies between services | 15:28 |
vkmc | so that leads me to | 15:28 |
vkmc | shall we establish some SMEs for each of the services to work on this? | 15:28 |
sean-k-mooney | well one way to test is a DNM patch to the different services that flips the flag | 15:29 |
sean-k-mooney | if the jobs explode then we know there is work to do | 15:29 |
sean-k-mooney | zuul speculative execution is nice that way | 15:29 |
tbarron | s/if/when/ | 15:29 |
sean-k-mooney | :) | 15:29 |
vkmc | all right :) | 15:30 |
tosky | the jobs currently defined in devstack-plugin-ceph already set up various services, I'd say having a cephadm version of them passing is step 1 | 15:30 |
tbarron | +1 | 15:31 |
fpantano | ++ /me not familiar with the current jobs but I agree, we can enable cephadm first and see how it goes and what we missed on the first place ^ | 15:31 |
sean-k-mooney | nova has 2 ceph jobs but both inherit from devstack-plugin-ceph-tempest-py3 and devstack-plugin-ceph-multinode-tempest-py3 | 15:32 |
tosky | the WIP patch can switch all of them, and when it's stable we can simply duplicate them | 15:32 |
tosky | sean-k-mooney: so if you depend on the DNM patch which uses the new defaults you should see the results, and same for the jobs in cinder which inherit from those devstack-plugin-ceph jobs | 15:33 |
sean-k-mooney | yes | 15:33 |
sean-k-mooney | we can have one patch to nova that does that and we can recheck it and tweak until they are happy | 15:33 |
tbarron | so I see nova, cinder, manila folks here, but not the other consumer, glance. | 15:34 |
* tbarron exempts rosmaita since he's cinder ptl | 15:34 | |
sean-k-mooney | well also maybe ironic? | 15:34 |
tbarron | would be if I forgot them | 15:35 |
sean-k-mooney | do they support boot from ceph via the iscsi gateway | 15:35 |
tbarron | sean-k-mooney: i dunno. they talk about it in ptg. | 15:35 |
sean-k-mooney | i know they have cinder boot support with iscsi but no idea if they test that with ceph and the iscsi gateway | 15:35 |
sean-k-mooney | probably not | 15:35 |
sean-k-mooney | at least with devstack | 15:35 |
sean-k-mooney | nova's nova-ceph-multistore job | 15:36 |
sean-k-mooney | will test glance | 15:36 |
* tbarron wonders if dansmith can be enticed to help if glance jobs blow up | 15:36 | |
sean-k-mooney | so maybe that is ok for now | 15:36 |
tosky | sean-k-mooney: a quick search with codesearch.openstack.org doesn't show any ironic repository | 15:36 |
sean-k-mooney | ack | 15:36 |
sean-k-mooney | https://github.com/openstack/nova/blob/master/.zuul.yaml#L467 | 15:37 |
* rosmaita is not paying any attention | 15:37 | |
tbarron | he's bragging again | 15:37 |
vkmc | xD | 15:37 |
vkmc | ok so | 15:38 |
sean-k-mooney | so for those not familiar with that job: we configure glance with both the file and ceph backend and test direct usage of images backed by rbd, ensuring we don't download them | 15:38 |
sean-k-mooney | so that will fail if glance does not work | 15:38 |
sean-k-mooney | with cephadm | 15:39 |
fpantano | +1 thanks for clarifying ^ | 15:39 |
opendevreview | James Parker proposed openstack/whitebox-tempest-plugin master: [WIP] Add multiple hugepage size job https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/825011 | 15:41 |
vkmc | which jobs do we want to see passing before getting the cephadm script merged? as an initial step I mean | 15:42 |
tosky | my suggestion is the jobs in the repository itself | 15:43 |
vkmc | very well | 15:43 |
vkmc | so let's start from there | 15:43 |
tosky | in fact there are: | 15:43 |
tosky | (moment) | 15:44 |
dansmith | tbarron: sorry, just saw this, reading back | 15:44 |
tbarron | dansmith: ty | 15:44 |
tosky | - devstack-plugin-ceph-tempest-py3 is the baseline | 15:44 |
tosky | - devstack-plugin-ceph-multinode-tempest-py3 its multinode version | 15:44 |
tosky | - devstack-plugin-ceph-cephfs-native and devstack-plugin-ceph-cephfs-nfs are non voting, so up to the manila team whether they should be considered as blocking | 15:45 |
tosky | - there is also devstack-plugin-ceph-tempest-fedora-latest which is voting | 15:45 |
sean-k-mooney | yep devstack-plugin-ceph-tempest-py3 is the MVP i think but that is really up to the plugin core team so just my 2 cents | 15:45 |
tosky | - non voting: devstack-plugin-ceph-master-tempest (but that should be probably not too different from the basic tempest job at this point) | 15:46 |
opendevreview | James Parker proposed openstack/whitebox-tempest-plugin master: [WIP] Add multiple hugepage size job https://review.opendev.org/c/openstack/whitebox-tempest-plugin/+/825011 | 15:46 |
tosky | - experimental: devstack-plugin-ceph-compute-local-ephemeral (nova? What is the status of that?) | 15:46 |
dansmith | tbarron: oh, I don't know much about ceph really, I just worked on that job because we really needed test coverage there | 15:46 |
tosky | that's the current status of https://opendev.org/openstack/devstack-plugin-ceph/src/branch/master/.zuul.yaml as far as I can see | 15:46 |
dansmith | tbarron: so yeah I can help if that job blows up in general, but I'm still not quite sure what is being discussed.. some change coming to ceph? | 15:47 |
abhishekk | dansmith, tbarron I will have a look if there is something wrong | 15:47 |
vkmc | abhishekk++ | 15:47 |
tosky | dansmith: tl;dr right now devstack-plugin-ceph deploys ceph manually; it has never used the more blessed way (ceph-ansible), and there is a new official way (cephadm) | 15:47 |
tbarron | abhishekk: dansmith excellent. The idea is to change devstack-plugin-ceph to use cephadm to deploy the target ceph cluster | 15:48 |
tosky | dansmith: we are discussing implementing the support for cephadm for the deployment | 15:48 |
dansmith | ah | 15:48 |
tbarron | in theory client side (e.g. glance) everything would "just work" | 15:48 |
tbarron | heh | 15:48 |
dansmith | well, I don't think that job does anything special during setup, it just wires nova and glance together to do the good stuff | 15:48 |
fpantano | dansmith: and a first review has been submitted: https://review.opendev.org/c/openstack/devstack-plugin-ceph/+/826484 | 15:49 |
dansmith | so if you don't a-break-a the setup, I doubt it will a-break-a the testing :) | 15:49 |
abhishekk | ++ | 15:49 |
dansmith | "special during *ceph* setup I meant" | 15:49 |
tbarron | yup, but it will be good to know whom to consult when the tests break anyways and it's not clear why | 15:51 |
abhishekk | tbarron, you can ping me (I will be around 1800 UTC) | 15:52 |
dansmith | I can deflect with the best of them, so yeah.. count me in :) | 15:52 |
vkmc | dansmith++ | 15:52 |
vkmc | great :) | 15:52 |
vkmc | so abhishekk and dansmith for glance, tosky, rosmaita, eharney and enriquetaso for cinder, sean-k-mooney for nova, tbarron, gouthamr and vkmc for manila | 15:53 |
sean-k-mooney | i may or may not join this meeting every week but feel free to ping me if needed | 15:54 |
vkmc | once we toggle jobs and if we see issues on specific jobs | 15:54 |
tosky | yep | 15:54 |
*** ykarel_ is now known as ykarel | 15:54 | |
vkmc | sean-k-mooney, good thing you mention this, we probably won't meet every week... maybe our next sync would be in one month from now, depending on how fast we get initial things merged | 15:55 |
vkmc | I'll continue to communicate on the mailing list | 15:55 |
tosky | and once the first results are promising, we can trigger a few other component-specific jobs in the new mode | 15:55 |
vkmc | tosky++ sounds good | 15:55 |
fpantano | +1 we have a plan :D | 15:55 |
vkmc | +1 we do | 15:55 |
tosky | vkmc: technically we also want to hear from QA people, as they are also core on that repository iirc | 15:56 |
vkmc | small bird told me carloss is joining us in the manila side as well :D | 15:56 |
tosky | kopecmartin, gmann ^^ | 15:56 |
carloss | :D | 15:56 |
vkmc | carloss++ | 15:56 |
vkmc | ok, 3 more minutes til top of the hour, so let's wrap here | 15:57 |
tosky | and also because the change impacts jobs which vote on tempest.git itself | 15:57 |
vkmc | next action item is to get review for that initial patch and start enabling it for the different jobs we have in that repo | 15:57 |
vkmc | and we can sync up again in a few weeks | 15:57 |
tosky | enable for testing, and when they are stable, start flipping them in a stable way? | 15:58 |
vkmc | anything else? | 15:58 |
vkmc | tosky, yes | 15:58 |
tosky | perfect, thanks | 15:59 |
vkmc | great! | 15:59 |
vkmc | #endmeeting | 15:59 |
opendevmeet | Meeting ended Fri Jan 28 15:59:42 2022 UTC. Information about MeetBot at http://wiki.debian.org/MeetBot . (v 0.1.4) | 15:59 |
opendevmeet | Minutes: https://meetings.opendev.org/meetings/devstack_cephadm/2022/devstack_cephadm.2022-01-28-15.01.html | 15:59 |
opendevmeet | Minutes (text): https://meetings.opendev.org/meetings/devstack_cephadm/2022/devstack_cephadm.2022-01-28-15.01.txt | 15:59 |
opendevmeet | Log: https://meetings.opendev.org/meetings/devstack_cephadm/2022/devstack_cephadm.2022-01-28-15.01.log.html | 15:59 |
vkmc | thanks everyone for joining and for your feedback! | 15:59 |
fpantano | vkmc++ thanks for starting this! | 15:59 |
tosky | thank you! | 16:00 |
enriquetaso | thanks | 16:00 |
enriquetaso | !! | 16:00 |
opendevmeet | enriquetaso: Error: "!" is not a valid command. | 16:00 |
fpantano | thanks everyone | 16:00 |
enriquetaso | ok | 16:00 |
opendevreview | Benny Kopilov proposed openstack/tempest master: Update volume schema for microversion https://review.opendev.org/c/openstack/tempest/+/826364 | 16:05 |
gmann | kopecmartin: +A | 16:33 |
gmann | tosky: vkmc have not read the log fully but i think the agreement is to go with the existing plugin. +1 from me. feel free to ping me for help/review on the job modification | 16:39 |
tosky | gmann: yep, existing repo, keep it working; thanks! | 16:44 |
sean-k-mooney | frickler: do you know if there is a cirros uefi boot capable image | 16:50 |
frickler | sean-k-mooney: the readme says it should be possible, but I don't know how to do it. if you need further help, maybe ask directly in #cirros on liberachat https://github.com/cirros-dev/cirros/blob/master/README.md | 17:00 |
*** jpena is now known as jpena|off | 17:06 | |
sean-k-mooney | frickler: no worries ill read up on it | 17:15 |
sean-k-mooney | frickler: all that is required is for the image to have the efi partition | 17:16 |
sean-k-mooney | then in glance we set an image property | 17:16 |
abhishekk | gmann, any difference in 'devstack_localrc' and 'devstack_local_conf' or do they both write to the local.conf file? | 17:19 |
sean-k-mooney | yes they are different | 17:22 |
sean-k-mooney | devstack_localrc does not override local.conf | 17:22 |
sean-k-mooney | abhishekk: local.conf supports extra sections that localrc does not | 17:23 |
tosky | https://opendev.org/openstack/devstack/raw/branch/master/roles/write-devstack-local-conf/README.rst | 17:24 |
sean-k-mooney | the devstack_localrc maps to the [[local|localrc]] section of the local.conf | 17:24 |
abhishekk | sean-k-mooney, ack, so I can move the contents under 'devstack_localrc' to 'devstack_local_conf' and remove the former | 17:24 |
sean-k-mooney | yes although you can use both at the same time | 17:24 |
abhishekk | sean-k-mooney, tosky ack, thank you | 17:25 |
sean-k-mooney | generally https://github.com/openstack/nova/blob/master/.zuul.yaml#L174-L193 we use local.conf for the config overrides | 17:26 |
sean-k-mooney | and localrc for the devstack macro constants | 17:26 |
sean-k-mooney | both can be done in local_conf but we normally partition it this way | 17:27 |
abhishekk | yeah, same thing we do in glance | 17:27 |
abhishekk | just wanted to check the difference and now I got it :D | 17:27 |
gmann | abhishekk: yeah devstack_local_conf is one you can move/use | 17:32 |
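As a concrete illustration of the split sean-k-mooney describes (the job name and option values here are invented for the example): `devstack_localrc` entries land in the `[[local|localrc]]` section of the generated local.conf, while `devstack_local_conf` carries the extra meta-sections such as `post-config`:

```yaml
- job:
    name: example-ceph-job          # hypothetical job name
    parent: devstack-tempest
    vars:
      devstack_localrc:             # -> [[local|localrc]] section
        CEPH_RELEASE: pacific       # illustrative devstack variable
      devstack_local_conf:          # -> extra local.conf meta-sections
        post-config:
          $NOVA_CONF:
            libvirt:
              images_type: rbd
```

See the write-devstack-local-conf role README linked by tosky above for the authoritative description of both variables.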
abhishekk | gmann, ack, thanks | 17:33 |
opendevreview | Merged openstack/devstack master: Use distro pip on Ubuntu https://review.opendev.org/c/openstack/devstack/+/825920 | 18:49 |
opendevreview | melanie witt proposed openstack/devstack master: Configure nova unified limits quotas https://review.opendev.org/c/openstack/devstack/+/789962 | 20:00 |
opendevreview | melanie witt proposed openstack/tempest master: Add configuration for compute unified limits feature https://review.opendev.org/c/openstack/tempest/+/790186 | 20:01 |
opendevreview | melanie witt proposed openstack/tempest master: Tests for nova unified quotas https://review.opendev.org/c/openstack/tempest/+/804311 | 20:09 |
opendevreview | Merged openstack/tempest master: Fix test_rebuild_server test by waiting for floating ip https://review.opendev.org/c/openstack/tempest/+/814085 | 20:23 |
Generated by irclog2html.py 2.17.3 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!