noonedeadpunk | would be sweet to land that to fix adjutant on master: https://review.opendev.org/c/openstack/openstack-ansible/+/893837 | 08:20 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-lxc_container_create master: Properly apply tags for include_tasks https://review.opendev.org/c/openstack/openstack-ansible-lxc_container_create/+/894294 | 11:01 |
farbod | hello again | 11:21 |
farbod | i have a question | 11:21 |
farbod | do we need a range of vlan IDs for linux bridge implementation of OSA and just one VLAN for OVS and OVN? | 11:22 |
farbod | ? | 11:28 |
noonedeadpunk | um, it depends? | 11:33 |
noonedeadpunk | you're not required to have vlan networks at all in any scenario | 11:33 |
farbod | so what is this trunk scenario with ranged VLANs? | 11:34 |
farbod | i am really confused | 11:34 |
farbod | are they physical VLAN IDs on my switch? | 11:34 |
farbod | or they are virtual inside the neutron service? | 11:35 |
farbod | why do we assign a whole interface to the vlan network? | 11:36 |
farbod | and what is the difference between the br-vlan and br-vxlan networks? | 11:36 |
farbod | i have been reading these concepts for about three days in a row and i can't understand them at all! | 11:37 |
farbod | I would be very grateful if someone could explain these concepts to me completely from the basics. | 11:37 |
noonedeadpunk | These should be configured on switches for sure | 11:37 |
noonedeadpunk | It depends on the use case. For example, you might want to provide a separate external or internal network for one of your customers. | 11:39 |
noonedeadpunk | With only flat networks you're limited to just 1 external network | 11:39 |
farbod | i am not talking about external right now | 11:40 |
farbod | i mean | 11:40 |
farbod | as i understand it | 11:40 |
noonedeadpunk | or well, depending on the amount of flat networks ofc. But each flat network kind of requires its own physical interface | 11:40 |
mgariepy | br-vlan can be used if you want to use a vlan provider network. It can be used for a variety of things, like exposing some vlans to projects, or using it for an external network. | 11:40 |
noonedeadpunk | Though, you can use vlan tagged interfaces for flat networks as well | 11:40 |
farbod | yes i know | 11:40 |
farbod | but imagine this scenario | 11:40 |
jamesdenton | "provider" networks are VLAN networks, and these need to be configured by someone with the admin role. "tenant" networks can also be VLAN networks, and these are automatically assigned by Neutron. That's what the range is for; the allocatable range of VLAN IDs. "Tenant" networks can also be GRE, VXLAN or GENEVE, and there are ranges for those, too | 11:40 |
farbod | so we are limited to VLAN range for the number of networks made by users? | 11:41 |
noonedeadpunk | yeah, it pretty much depends on your scenario | 11:41 |
noonedeadpunk | farbod: usually, ppl use vxlans for user private networks | 11:41 |
noonedeadpunk | not vlans | 11:42 |
farbod | you mean one physical VLAN is enough for vxlan network and many projects? | 11:42 |
noonedeadpunk | yes | 11:42 |
jamesdenton | VLANs are used for infrastructure networks; the overlay network, for example, encapsulates the dozens or hundreds of vxlan networks | 11:43 |
farbod | so you mean packet tagging will be done inside the neutron and not on physical? | 11:43 |
jamesdenton | VLANs can also be used for VM workloads, "tenant networks" | 11:43 |
jamesdenton | yes, Neutron is expected to handle the tagging of packets | 11:43 |
farbod | ok | 11:43 |
farbod | but | 11:43 |
jamesdenton | tagging of frames, rather. the vlan tagging. The switchports need to be configured to allow that. And the VLANs need to exist on the switches | 11:44 |
farbod | whats the difference between overlay and tenant network? | 11:44 |
jamesdenton | well, an overlay network is a network type that "encapsulates" traffic. Meaning, you can run all of the VXLAN-based tenant networks inside a single 'overlay' network. So VLAN 100 might carry traffic for VXLAN 5,10,15 | 11:45 |
farbod | for cross node traffic? | 11:45 |
jamesdenton | the tenant network 'web' is vxlan id 5, and tenant network 'db' is vxlan ID 10 | 11:45 |
jamesdenton | between nodes, yes | 11:45 |
jamesdenton | the nodes build a vxlan mesh | 11:45 |
farbod | and why does br-vlan use multiple VLANs, i mean a range of VLANs, for the overlay network? | 11:46 |
jamesdenton | that's a mix of concepts | 11:46 |
jamesdenton | for vlan networks you might support vlan IDs 100-999 | 11:47 |
jamesdenton | for vxlan networks you might support vxlan ids 8999-11334 | 11:47 |
farbod | on the switch? | 11:47 |
jamesdenton | vlan on the switch, vxlan inside an overlay network (there's only 1) - the overlay network between hosts might be vlan 50 | 11:48 |
jamesdenton | vxlan traffic gets tunneled through it. | 11:48 |
farbod | aha | 11:48 |
farbod | so we have one tunnel network connected to a physical VLAN on the switch, and there are lots of vxlan networks which are tenant networks and live virtually inside neutron? | 11:49 |
farbod | am i right? | 11:49 |
jamesdenton | exactly | 11:49 |
farbod | so | 11:49 |
farbod | ok | 11:49 |
farbod | if we leave this aside | 11:49 |
jamesdenton | and those aren't directly reachable, thus the need for a neutron router - it connects the tenant network (vxlan) to a provider network (vlan) - and through the use of a floating IP (nat) allows someone to connect to the vm | 11:49 |
farbod | ok | 11:50 |
jamesdenton | i've got to run - be back in a few hrs | 11:50 |
farbod | thanks | 11:50 |
farbod | anyone else to answer my questions? | 11:50 |
farbod | sorry about that :( | 11:50 |
farbod | ? | 11:51 |
noonedeadpunk | you throw a question and then we'll see :D | 11:55 |
farbod | oh OK | 11:56 |
farbod | sorry | 11:56 |
farbod | i mean | 11:56 |
farbod | i know only one VLAN is enough for overlay network | 11:57 |
farbod | but if it's enough | 11:58 |
farbod | why | 11:58 |
farbod | we use br-vlan | 11:58 |
farbod | with a range of VLAN IDs? | 11:58 |
noonedeadpunk | as I said - you're not obliged to use it | 11:58 |
farbod | why do some ppl use it? | 11:58 |
noonedeadpunk | We use it to bring customer-owned networks into the region | 11:58 |
noonedeadpunk | So folks come and say - we have an ipv4 network we own - we want to use it in your cloud | 11:59 |
noonedeadpunk | the second use case I personally have is to connect the Octavia lbaas network with the controllers | 11:59 |
farbod | you mean this br-vlan is only for external networks? | 12:00 |
noonedeadpunk | As octavia-api needs to reach the amphoras. One way could be to route that, but another is to just use a vlan network | 12:00 |
farbod | and not internal ones? | 12:00 |
noonedeadpunk | it can be used for internals as well | 12:00 |
noonedeadpunk | but it can be used for externals too | 12:00 |
farbod | internal in physical space? or inside neutron? | 12:00 |
noonedeadpunk | Like I know folks who don't do flat networks at all - they just use vlan for external | 12:00 |
noonedeadpunk | Well, with vlan it's more about how you configure it on the switches :D | 12:01 |
opendevreview | Merged openstack/openstack-ansible stable/2023.1: Update Adjutant and Neutron SHAs https://review.opendev.org/c/openstack/openstack-ansible/+/893837 | 12:01 |
noonedeadpunk | what neutron does is basically bridge the tagged vlan with the VM interface | 12:01 |
noonedeadpunk | and creates that tag | 12:01 |
farbod | let me ask my question in another way | 12:02 |
noonedeadpunk | and then it's up to switch and your networking how "internal" it is | 12:02 |
farbod | if i don't configure a br-vlan, what will i miss? | 12:02 |
noonedeadpunk | depending on your usecases I guess | 12:03 |
noonedeadpunk | Like I can imagine how to use just a bunch of flat networks instead of vlans | 12:03 |
noonedeadpunk | matter of taste imo | 12:03 |
farbod | so let me explain my scenario | 12:03 |
farbod | i have a bunch of servers with only one physical interface | 12:04 |
farbod | and limited to only 5 VLANs on the physical switch | 12:04 |
noonedeadpunk | if you want to have a range of vlans pre-configured on switches that you can use anytime, or create a flat network and run a role each time you need an extra one | 12:04 |
farbod | i want to make a private cloud | 12:04 |
noonedeadpunk | yeah, then you don't need that | 12:04 |
farbod | with different projects | 12:04 |
farbod | with their own networks | 12:04 |
farbod | multiple networks | 12:04 |
noonedeadpunk | if you have a limitation on the amount of vlans on the switch | 12:04 |
farbod | and maybe some routing between networks | 12:05 |
noonedeadpunk | just make a flat network as a vlan | 12:05 |
noonedeadpunk | for external connectivity | 12:05 |
noonedeadpunk | and you will be fine | 12:05 |
farbod | but on the other hand i want to assign public IPs which are available on one of these VLANs | 12:05 |
farbod | actually i can bridge on this vlan | 12:05 |
noonedeadpunk | you can add eth1.100 as a flat network | 12:05 |
noonedeadpunk | no problems with that as long as interface name is consistent | 12:06 |
farbod | OK | 12:06 |
noonedeadpunk | and then eth1.200 as vxlan | 12:06 |
noonedeadpunk | and eth1.300 as mgmt | 12:06 |
noonedeadpunk | and eth1.400 as storage | 12:06 |
farbod | will it make a difference if i use LXB, OVS or OVN? | 12:06 |
noonedeadpunk | I guess you got the idea :) | 12:06 |
farbod | yes i got it | 12:06 |
farbod | and one VLAN for external network yes? | 12:07 |
noonedeadpunk | yes, you just add that eth1.100 as flat network to neutron | 12:07 |
farbod | oh OK | 12:07 |
noonedeadpunk | it can be tagged interface as well | 12:07 |
farbod | sorry for taking your time but do you have some time to guide me through the configuration? | 12:08 |
farbod | i have my configs ready and just want to check them out | 12:08 |
noonedeadpunk | And I wouldn't add just eth1 as a flat network, as then neutron will manage it and add it to the bridge | 12:08 |
noonedeadpunk | so you won't be able to create other vlans on it | 12:08 |
farbod | yes i tested that out and it caused some problems | 12:08 |
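A rough sketch of the flat-external-network-on-a-tagged-interface idea from the exchange above, expressed as a provider_networks entry; br-ext, eth1.100 and the network name are illustrative placeholders for the deployer's own layout:

global_overrides:
  provider_networks:
    - network:
        container_bridge: "br-ext"        # bridge carrying the tagged external sub-interface
        container_type: "veth"
        container_interface: "eth12"
        host_bind_override: "eth1.100"    # host-side interface Neutron binds the flat network to
        type: "flat"
        net_name: "flat"
        group_binds:
          - neutron_linuxbridge_agent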
noonedeadpunk | Regarding lxb/ovs/ovn - well. I really love lxb but it's barely maintained nowadays. It's even marked as experimental, so it may be gone anytime soon due to limited interest. But I like it most by far, as it's super simple to debug and understand wtf is going on | 12:09 |
farbod | will you guide me? | 12:09 |
farbod | ? | 12:10 |
noonedeadpunk | I can try, but I don't have plenty of time to be frank | 12:10 |
farbod | OK | 12:10 |
farbod | so lets look at this | 12:10 |
farbod | user config yml file: https://paste.opendev.org/show/bzehqFuNZ5GS24011ukP/ | 12:11 |
noonedeadpunk | ovs is kinda an evolution of lxb, which I never really got - just a more troublesome version that has the same drawbacks as lxb basically. OVN is a step forward and kind of the future. It has its own issues, like it's super complex to debug and understand what's wrong in there | 12:11 |
farbod | what changes should i make to make it compatible with my scenario? | 12:11 |
noonedeadpunk | It's also relatively new, but it solves the issues ovs/lxb have with network namespaces | 12:12 |
farbod | aha | 12:12 |
farbod | infra node network config: https://paste.opendev.org/show/b8UbL8rvYPwML7h8WR85/ | 12:13 |
farbod | and compute and storage node network config: https://paste.opendev.org/show/bbvYa26PW0jVmep6rHYC/ | 12:13 |
farbod | that enp8s0.4040 is the VLAN that has access to the public IPs | 12:14 |
farbod | which is on the infra node and in its config file | 12:14 |
noonedeadpunk | aha, and you're going with... ovn/ovs/lxb? | 12:15 |
noonedeadpunk | as config will depend on that | 12:15 |
farbod | with lxb | 12:15 |
farbod | i want it to be as simple as possible | 12:15 |
farbod | at least for now, while my knowledge is limited :) | 12:16 |
noonedeadpunk | ok, then it's easier as I know how to do that lol | 12:16 |
farbod | nice :D | 12:16 |
noonedeadpunk | give me couple of mins | 12:19 |
farbod | OK | 12:19 |
noonedeadpunk | farbod: is interface naming consistent across all hosts? All of them are enp8s0? | 12:23 |
farbod | no | 12:23 |
farbod | the second one is enp41s0 | 12:23 |
farbod | the compute and storage one | 12:23 |
noonedeadpunk | and it's ubuntu 20.04? | 12:24 |
farbod | debian 12 | 12:24 |
noonedeadpunk | doesn't it come with netplan? | 12:25 |
farbod | no | 12:25 |
noonedeadpunk | Also I don't think we support debian 12 yet | 12:25 |
farbod | i edit | 12:25 |
farbod | oh sorry | 12:26 |
farbod | debian 11 | 12:26 |
farbod | yes | 12:26 |
farbod | i tried the 12 one | 12:26 |
mgariepy | systemd-networkd can rename the interface as well. | 12:26 |
noonedeadpunk | aha, ok, as 12 comes with netplan | 12:26 |
farbod | i got error in deployment | 12:26 |
farbod | yes | 12:26 |
noonedeadpunk | farbod: so, you'll need either to rename interfaces, or create a bridge for enp8s0.4040 | 12:28 |
farbod | i prefer the bridge one | 12:28 |
noonedeadpunk | you actually had quite a fair config with 2 small nits | 12:33 |
noonedeadpunk | here's edited one: https://paste.openstack.org/show/bkPQyY8sy4JcftArFaTa/ | 12:33 |
noonedeadpunk | I assumed you will create a br-ext bridge with enp8s0.4040 in it | 12:33 |
farbod | can you guide me through that? :) i think i will misconfigure it. | 12:34 |
mgariepy | noonedeadpunk, do you have some pci passthrough with zed somewhere? | 12:34 |
mgariepy | my config looks ok but the scheduler does not want to cooperate lol | 12:35 |
noonedeadpunk | nah, I don't | 12:35 |
noonedeadpunk | I do have gpu's but they're vgpus | 12:35 |
noonedeadpunk | and they're on Xena still | 12:36 |
mgariepy | ok | 12:36 |
noonedeadpunk | will upgrade to 2023.1 in 2 weeks though | 12:36 |
mgariepy | i'm not quite sure if it's an issue with zed, since it seems to be a half release, or with something else. | 12:37 |
mgariepy | The DB does have the pci device available. | 12:37 |
noonedeadpunk | but it's placement then, not scheduler you mean? | 12:37 |
mgariepy | i'ts not in placement | 12:38 |
noonedeadpunk | As nova asks placement for pci device and then scheduler ofc decides if there's a match from available resources | 12:38 |
noonedeadpunk | so placement gives resources where it's possible to schedule? | 12:38 |
noonedeadpunk | as pci devices are kinda among resource providers | 12:39 |
mgariepy | it passes everything until the PciPassthroughFilter. | 12:39 |
noonedeadpunk | aha | 12:39 |
noonedeadpunk | ok, we don't use that at all | 12:39 |
farbod | noonedeadpunk: is this ok for infra node network config file?: https://paste.opendev.org/show/bLWVt6kVFYkcObRbAJFY/ | 12:41 |
farbod | and should i have this br-ext bridge on the compute nodes? | 12:41 |
noonedeadpunk | yes, you should | 12:41 |
noonedeadpunk | or well | 12:41 |
noonedeadpunk | again depending on your scenario :) | 12:42 |
farbod | i mean | 12:42 |
noonedeadpunk | if you want to have VMs directly connected to external network - then yes | 12:42 |
farbod | won't the public IPs be routed from the network node, which here is the infra node? | 12:42 |
noonedeadpunk | Though there's another way around: have only the internal (vxlan) network on the VM, then have a neutron router, basically an ip network namespace on infra, which connects the internal and external networks with nat | 12:43 |
farbod | oh | 12:43 |
farbod | i get it | 12:43 |
noonedeadpunk | and then use a floating ip to access the vm from the external network through public IPs | 12:43 |
farbod | yes | 12:43 |
farbod | did you check the infra network config file which i sent above? | 12:44 |
noonedeadpunk | looks okayish | 12:44 |
farbod | what about this storage compute node config? : https://paste.opendev.org/show/bjwsAFI3I5F5sRmQVAnY/ | 12:45 |
noonedeadpunk | I'm honestly not sure about storage and how LVM works with iscsi, as I never really used that outside of AIO setups | 12:46 |
noonedeadpunk | But excluding this part - looks fair | 12:46 |
farbod | ok | 12:46 |
farbod | should i do additional configs for this storage or is it enough? | 12:46 |
noonedeadpunk | (and I'm not sure if glance needs the storage network at all, as I believe it will just store images locally as files) | 12:47 |
farbod | is there any simpler way to configure this storage node? | 12:49 |
farbod | i mean an automated approach | 12:51 |
farbod | i just want a storage for running vms | 12:51 |
mgariepy | any issue to upgrade to 2023.1 from zed right now ? | 12:52 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Re-order OVN diagrams in networking guide https://review.opendev.org/c/openstack/openstack-ansible/+/894297 | 12:52 |
noonedeadpunk | farbod: I believe your use case is described well enough by this example: https://docs.openstack.org/openstack-ansible/latest/user/network-arch/example.html#single-interface-or-bond | 12:53 |
noonedeadpunk | mgariepy: if you upgrade to stable/2023.1 | 12:54 |
farbod | noonedeadpunk: what is the simplest way to set up storage? | 12:55 |
farbod | in my config there are LVM backends that i don't want | 12:55 |
noonedeadpunk | probably simplest is .... NFS? | 12:55 |
noonedeadpunk | I hate it though | 12:55 |
farbod | should i do configs on the host? | 12:56 |
noonedeadpunk | but at least that is shared storage that you technically can scale.... | 12:56 |
noonedeadpunk | we don't provide means to install nfs, and yes, you'll need to apply some configs to make use of it | 12:56 |
noonedeadpunk | but it's pretty straightforward I'd say | 12:57 |
farbod | any docs for it? | 12:57 |
noonedeadpunk | that's for cinder: https://docs.openstack.org/openstack-ansible-os_cinder/latest/configure-cinder.html#nfs-backend | 12:57 |
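A minimal sketch of what the NFS backend override from that doc can look like in user_variables.yml; the backend name, server IP and export path below are placeholders:

cinder_backends:
  nfs_volume:
    volume_backend_name: NFS_VOLUME1
    volume_driver: cinder.volume.drivers.nfs.NfsDriver
    nfs_mount_options: "rsize=65535,wsize=65535,timeo=1200,actimeo=120"
    nfs_shares_config: /etc/cinder/nfs_shares
    shares:
      - ip: "10.0.0.10"          # placeholder NFS server
        share: "/vol/cinder"     # placeholder export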
noonedeadpunk | for glance it was a matter of configuring a mount iirc, which can be done with the role | 12:58 |
noonedeadpunk | like that: https://opendev.org/openstack/openstack-ansible-os_glance/src/branch/master/defaults/main.yml#L247-L252 | 12:58 |
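Filled in, that role default is roughly the following; the server address and export path are placeholders, and the local path is the usual glance image store location:

glance_nfs_client:
  - server: "10.0.0.10"                  # placeholder NFS server
    remote_path: "/images"               # placeholder export
    local_path: "/var/lib/glance/images"
    type: "nfs"
    options: "_netdev,auto"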
farbod | 👍️ | 12:59 |
admin1 | mgariepy, for me the upgrade broke the nova-console, and terraform also broke :D | 13:00 |
admin1 | the rest looks fine | 13:00 |
noonedeadpunk | For us everything worked quite nicely. 2 small bugs wrt custom features we don't test in gates | 13:01 |
noonedeadpunk | one regarding vpnaas and another for wiping nova db periodically | 13:01 |
mgariepy | what broke in terraform ? | 13:03 |
mgariepy | admin1, ? | 13:03 |
mgariepy | stuff not tested in gate is always a bit more tricky lol. | 13:03 |
mgariepy | it's always there that issues occur :) also a DB with more history sometimes causes issues. | 13:04 |
mgariepy | nova: pci.report_in_placement = False in zed, and to be frank i'm not even sure placement supports pci correctly on zed. | 13:29 |
noonedeadpunk | yeah, it should | 13:30 |
noonedeadpunk | as we report gpu as pci in xena | 13:30 |
mgariepy | it doesn't support the traits stuff in the device. | 13:30 |
mgariepy | hmm do you set report_in_placement to true ? | 13:30 |
mgariepy | not even an option there :s | 13:31 |
noonedeadpunk | hm, no... though we do see gpu PCI devices as resources that are part of compute node | 13:36 |
noonedeadpunk | though we use `enabled_mdev_types` - so it might be different | 13:37 |
noonedeadpunk | but it;s like that: https://paste.openstack.org/show/bMmC9aHqRPx3HsdxpCWd/ | 13:37 |
noonedeadpunk | but I guess while it looks like PCI, it's not in fact the same | 13:37 |
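For context, the `enabled_mdev_types` approach mentioned above typically ends up in nova.conf via an override along these lines; the mdev type name is a placeholder that depends on the GPU and driver in use:

nova_nova_conf_overrides:
  devices:
    enabled_mdev_types: nvidia-233   # hypothetical mdev type, check what the host actually exposes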
opendevreview | Christian Rohmann proposed openstack/openstack-ansible-haproxy_server stable/yoga: Make use of haproxy_rise and haproxy_fall variables https://review.opendev.org/c/openstack/openstack-ansible-haproxy_server/+/894299 | 13:38 |
mgariepy | in the placement db you have these resources? | 13:40 |
mgariepy | it's probably a bit different for these.. | 13:40 |
noonedeadpunk | I do | 13:46 |
mgariepy | it's not type 3 is it? | 13:47 |
mgariepy | type: PCI_DEVICE | 13:47 |
mgariepy | probably like 10 ? VGPU or something | 13:47 |
noonedeadpunk | https://paste.openstack.org/show/bGSM4H5BBIxgwObJYTYg/ | 13:48 |
noonedeadpunk | how to check type? | 13:48 |
mgariepy | mysql | 13:49 |
noonedeadpunk | yeah, resource_class VGPU | 13:49 |
noonedeadpunk | checked with `openstack resource provider inventory list` | 13:49 |
mgariepy | so it is indeed different | 13:49 |
noonedeadpunk | despite it being `local_pci_0000_25_01_1` | 13:50 |
mgariepy | yeah | 13:50 |
noonedeadpunk | well, sorry for confusing you :) | 13:50 |
mgariepy | zed really seems to be a half release on this part lol. | 13:50 |
mgariepy | no worries | 13:50 |
opendevreview | Merged openstack/openstack-ansible master: Bump SHA for openstack-ansible-plugins collection https://review.opendev.org/c/openstack/openstack-ansible/+/893835 | 13:59 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Stop reffering _member_ role https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/891462 | 14:04 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_adjutant master: Use proper galera port in configuration https://review.opendev.org/c/openstack/openstack-ansible-os_adjutant/+/890092 | 14:04 |
noonedeadpunk | I hope adjutant is fixed now | 14:04 |
farbod | hi again :) | 14:28 |
farbod | when i try to make volumes i get this message -> schedule allocate volume:Could not find any available weighted backend. | 14:29 |
opendevreview | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: [doc] Add example network architectures for OVN https://review.opendev.org/c/openstack/openstack-ansible/+/894384 | 16:23 |
noonedeadpunk | jamesdenton: it would be awesome if you could take a look at this thing, as I'm not 100% sure how true that is ^ | 16:24 |
noonedeadpunk | FWIW, I will be mostly away during next week | 16:24 |
noonedeadpunk | so glhf :D | 16:25 |
mgariepy | have fun also . | 16:30 |
mgariepy | as a matter of preference i do prefer not having lxb in the way of ovn | 16:30 |
mgariepy | i only use lxb on the ctrl for the lxc containers | 16:32 |
noonedeadpunk | I played with ovs for lxc and it worked nicely as well just in case | 16:32 |
noonedeadpunk | not sure why, but it works :) | 16:32 |
mgariepy | lol | 16:32 |
mgariepy | yeah indeed it should. | 16:32 |
mgariepy | i prefer lxb on it because i'm lazy | 16:33 |
noonedeadpunk | but it's exactly the same - you just need to add `openvswitch` somewhere in openstack_user_config | 16:33 |
mgariepy | iptables works just fine on lxb and a physical interface, while i use flows for everything else. | 16:34 |
noonedeadpunk | we have doc for that I'm pretty sure | 16:34 |
noonedeadpunk | And ovs bridges were created with maas in my case | 16:34 |
mgariepy | maas :sick: | 16:34 |
noonedeadpunk | So it was very-very lazy | 16:34 |
mgariepy | i don't like maas lol | 16:34 |
noonedeadpunk | It has things to love it and to hate it | 16:35 |
mgariepy | the network part was not good enough for me. | 16:35 |
noonedeadpunk | They released an ansible collection to manage it through the API in an automated way a couple of months ago | 16:35 |
mgariepy | waiting on incus to release :D. | 16:36 |
noonedeadpunk | yeah..... | 16:36 |
mgariepy | probably better with ansible than maas | jq .. | 16:36 |
noonedeadpunk | https://github.com/maas/ansible-collection/tree/main/plugins/modules | 16:37 |
noonedeadpunk | I'd say ironic compared to maas is too complicated for personnel responsible for hardware testing and provisioning | 16:38 |
noonedeadpunk | but yeah, it's too narrow in some cases indeed | 16:39 |
mgariepy | yeah i did test bifrost a bit... and finally dropped it | 16:39 |
noonedeadpunk | like having DNS but without good clustering and HA | 16:39 |
noonedeadpunk | no debian :D | 16:40 |
noonedeadpunk | so, what do you do then? | 16:40 |
farbod | can we deploy a test enviroment without cinder or any other block storage? | 16:41 |
noonedeadpunk | you can | 16:41 |
farbod | i mean i tried to run an instance without block storage but i got no volume backend | 16:41 |
noonedeadpunk | just don't define storage* in openstack_user_config | 16:41 |
noonedeadpunk | um. you should be able to avoid using block storage | 16:41 |
mgariepy | running on research grants i don't get new machines often, so i only have a pxe with some jinja for the preseeding via curtain | 16:42 |
noonedeadpunk | but it depends on the VM creation command | 16:42 |
mgariepy | curtin | 16:42 |
farbod | is it possible from dashboard? | 16:42 |
farbod | you mean using Ephemeral? | 16:42 |
noonedeadpunk | Um, I think it is? You need to have flavors with some disk (more than 0) and then when creating a server select "create volume: no" | 16:43 |
noonedeadpunk | yes | 16:43 |
farbod | aha ok Thanks | 16:43 |
noonedeadpunk | mgariepy: I see | 16:43 |
mgariepy | it's not perfect but works kinda ok. | 16:45 |
farbod | should i deploy storage-infra_hosts when i don't need the storage node? | 16:49 |
farbod | i mean cinder api service | 16:49 |
noonedeadpunk | no, neither it nor storage_hosts | 16:50 |
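A minimal sketch of openstack_user_config.yml host groups with the storage groups simply left out (host names and IPs are placeholders); omitting storage-infra_hosts and storage_hosts means the cinder services are just not deployed:

shared-infra_hosts:
  infra1:
    ip: 172.29.236.11
os-infra_hosts:
  infra1:
    ip: 172.29.236.11
identity_hosts:
  infra1:
    ip: 172.29.236.11
network_hosts:
  infra1:
    ip: 172.29.236.11
compute_hosts:
  compute1:
    ip: 172.29.236.12
# no storage-infra_hosts / storage_hosts entries -> no cinder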
admin1 | mgariepy , this is what broke -> https://gist.githubusercontent.com/a1git/2efcfd956f342333070f04b7bc048f6f/raw/0bc97dbc4888fae7bd97505557f73a3eb2186480/gistfile1.txt .. ( i was changing the water in the aquarium, did not see the msg earlier ) | 16:50 |
noonedeadpunk | was it a sea aquarium, like with salt water, corals, etc? :) | 16:50 |
admin1 | nah .. tap water kind with guppies | 16:51 |
noonedeadpunk | that is too easy lol | 16:51 |
mgariepy | haha. | 16:52 |
admin1 | well, my aquarium is in a corner and i have to run a 30 meter pipe through living room from kitchen to do it :) | 16:52 |
noonedeadpunk | Well, we were running a sea aquarium in the office back in the days | 16:52 |
noonedeadpunk | And from time to time creatures were just disappearing there, including fish | 16:53 |
noonedeadpunk | I assume the fish were feeling bad and got eaten by the corals | 16:53 |
noonedeadpunk | But nobody knows for sure | 16:54 |
noonedeadpunk | They were just never seen again | 16:54 |
mgariepy | weird | 16:54 |
mgariepy | were they flying fish ? ;p | 16:55 |
noonedeadpunk | well, when we tried to re-arrange the rocks there, we found like thousands of worms hiding in there that we didn't see during the day | 16:56 |
noonedeadpunk | and we never put any worms there, just in case - they likely evolved somehow | 16:56 |
mgariepy | rocky9 disk full | 16:56 |
mgariepy | :/ why ? | 16:57 |
mgariepy | https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/893773 | 16:57 |
noonedeadpunk | because of ovs 2.17 | 16:57 |
noonedeadpunk | but no idea why it is still 2.17 | 16:57 |
noonedeadpunk | We've released the lock we had | 16:57 |
noonedeadpunk | And I wasn't able to see 2.17 installed in local aio | 16:57 |
noonedeadpunk | but it somehow is in CI | 16:57 |
noonedeadpunk | so try to re-check | 16:58 |
noonedeadpunk | ah, and we also have a patch on master that allows limiting journald size | 16:58 |
noonedeadpunk | it wasn't backported though | 16:58 |
noonedeadpunk | btw, would be awesome to land adjutant things on master as well https://review.opendev.org/q/project:openstack/openstack-ansible-os_adjutant+status:open | 16:58 |
mgariepy | should we backport the journald bits? | 17:02 |
mgariepy | ha. not fully merged yert. | 17:03 |
mgariepy | yet** | 17:03 |
noonedeadpunk | worth understanding how in the world we get ovs 2.17 instead of 3.1 | 17:06 |
noonedeadpunk | anyway, checking out for the week :) | 17:07 |
mgariepy | ok have fun | 17:07 |
farbod | why do i get this error: "Danger: An error occurred. Please try again later." when i want to create a network from the admin section in horizon? | 18:00 |
jamesdenton | noonedeadpunk i will take a look at those docs, thank you | 18:21 |