*** macz_ has quit IRC | 00:14 | |
*** cshen has quit IRC | 02:04 | |
*** spatel has joined #openstack-ansible | 02:38 | |
*** spatel has quit IRC | 02:42 | |
-openstackstatus- NOTICE: The Gerrit service on review.opendev.org is being restarted quickly to enable support for Git protocol v2, downtime should be less than 5 minutes | 02:53 | |
*** pto has joined #openstack-ansible | 05:31 | |
*** pto has quit IRC | 05:31 | |
*** pto has joined #openstack-ansible | 05:32 | |
*** evrardjp_ has quit IRC | 05:33 | |
*** evrardjp has joined #openstack-ansible | 05:33 | |
*** pto has quit IRC | 05:38 | |
*** pto_ has joined #openstack-ansible | 05:38 | |
*** pto_ has quit IRC | 05:45 | |
*** pto has joined #openstack-ansible | 05:46 | |
*** lemko0 has joined #openstack-ansible | 05:47 | |
*** lemko has quit IRC | 05:50 | |
*** lemko0 is now known as lemko | 05:50 | |
*** maharg101 has joined #openstack-ansible | 07:34 | |
*** maharg101 has quit IRC | 07:38 | |
*** jbadiapa has joined #openstack-ansible | 07:42 | |
*** pcaruana has joined #openstack-ansible | 07:56 | |
*** pcaruana has quit IRC | 07:57 | |
*** rpittau|afk is now known as rpittau | 07:57 | |
*** pcaruana has joined #openstack-ansible | 08:04 | |
*** pcaruana has quit IRC | 08:04 | |
*** pcaruana has joined #openstack-ansible | 08:05 | |
*** pcaruana has quit IRC | 08:05 | |
*** pcaruana has joined #openstack-ansible | 08:12 | |
*** pcaruana has quit IRC | 08:12 | |
*** cshen has joined #openstack-ansible | 08:15 | |
*** evrardjp has quit IRC | 08:15 | |
*** evrardjp has joined #openstack-ansible | 08:16 | |
*** miloa has joined #openstack-ansible | 08:19 | |
*** newtim has quit IRC | 08:19 | |
*** andrewbonney has joined #openstack-ansible | 08:22 | |
*** maharg101 has joined #openstack-ansible | 08:26 | |
admin0 | \o | 08:27 |
*** tosky has joined #openstack-ansible | 08:47 | |
noonedeadpunk | mornings | 09:42 |
openstackgerrit | Andrew Bonney proposed openstack/openstack-ansible-os_keystone master: Add security.txt file hosting to keystone https://review.opendev.org/c/openstack/openstack-ansible-os_keystone/+/766437 | 09:59 |
*** macz_ has joined #openstack-ansible | 10:09 | |
*** macz_ has quit IRC | 10:14 | |
admin0 | applying octavia in prod, i get fatal: [c3_octavia_server_container-692ae034]: FAILED! => {"attempts": 5, "changed": false, "msg": "openstacksdk is required for this module"} | 10:16 |
admin0 | 10:16 | |
admin0 | what does that mean ? | 10:16 |
admin0 | do i need to rebuild utility . or the containers themselves | 10:17 |
noonedeadpunk | can you at least post what task is failing? | 10:18 |
admin0 | sorry .. https://gist.github.com/a1git/1c2364a0fd8b7573908d6c87c66c5d97 | 10:19 |
noonedeadpunk | uh | 10:20 |
noonedeadpunk | let me patch this out.... | 10:21 |
admin0 | it didn't happen in the lab . but applying from lab -> prod ;( | 10:22 |
admin0 | is it an easy patch that i can manually apply for the time being ? | 10:22 |
noonedeadpunk | yep, you will be able to copy the cherry-pick command from gerrit's "Download" menu and just run it inside /etc/ansible/roles/os_octavia | 10:23 |
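The cherry-pick workflow described above looks roughly like this. The `refs/changes/...` path and patchset number below are illustrative placeholders; the real command should be copied from the change's "Download > Cherry-Pick" entry in Gerrit:

```shell
# Apply an in-review Gerrit change to a deployed OSA role checkout.
# Patchset number (the trailing /1) is an assumption - copy the exact
# ref from the review page's Download menu.
cd /etc/ansible/roles/os_octavia
git fetch https://review.opendev.org/openstack/openstack-ansible-os_octavia \
    refs/changes/93/766693/1
git cherry-pick FETCH_HEAD
```

Running this requires network access to review.opendev.org and a clean role checkout.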
admin0 | ok .. waiting for it :) | 10:31 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Delegate info gathering to setup host https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/766693 | 10:35 |
noonedeadpunk | admin0: ^ | 10:35 |
admin0 | thanks ..re-running | 10:37 |
noonedeadpunk | jrosser: (not critical, whenever you are around - monday or smth) - do you recall the issue with live migrations and apparmor where we were about to patch some rule for nova, but it got fixed either by a libvirt package or smth? | 10:37 |
noonedeadpunk | or we've added some variable to provide a custom path for libvirt.... | 10:38 |
admin0 | noonedeadpunk, nope: https://gist.githubusercontent.com/a1git/e689f3cea602e718c40a91c715c9fe1a/raw/f242a882359e7444037557bf2e9702f4232ec509/gistfile1.txt | 10:41 |
admin0 | failed with something else | 10:42 |
noonedeadpunk | ah | 10:42 |
noonedeadpunk | fair | 10:42 |
admin0 | fair :D ? lol | 10:42 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Delegate info gathering to setup host https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/766693 | 10:44 |
noonedeadpunk | yeah, my bad, fixed | 10:44 |
*** macz_ has joined #openstack-ansible | 10:45 | |
*** macz_ has quit IRC | 10:49 | |
admin0 | re-running | 10:52 |
*** kukacz has quit IRC | 10:53 | |
admin0 | noonedeadpunk, still something there: https://gist.githubusercontent.com/a1git/8e448e3c54ebfff81fde6e4bb048ccd9/raw/c4099077d34e312532590b367d08c746b2328c80/gistfile1.txt | 10:56 |
admin0 | is there anything you want me to do/check noonedeadpunk ? | 11:08 |
*** kukacz has joined #openstack-ansible | 11:16 | |
noonedeadpunk | hm | 11:19 |
noonedeadpunk | sorry, was a bit busy | 11:19 |
admin0 | if there are any checks you want me to do .. manual ones for paths etc, i can do them | 11:21 |
admin0 | before it fails again, for you to validate anything | 11:21 |
noonedeadpunk | and what oa version are you running? | 11:23 |
noonedeadpunk | *osa | 11:24 |
noonedeadpunk | is it train? | 11:24 |
*** odyssey4me has joined #openstack-ansible | 11:37 | |
admin0 | ussuri | 11:45 |
*** SecOpsNinja has joined #openstack-ansible | 11:47 | |
SecOpsNinja | hi to all. where can i find the relation list/information regarding nova and the other databases? i'm having ghost vms that i can't delete using the dashboard or cli, so the only solution would be to delete them from the database | 11:49 |
*** sshnaidm is now known as sshnaidm|off | 11:59 | |
noonedeadpunk | admin0: hm. can you kindly check that you don't have overrides of `octavia_service_setup_host_python_interpreter` or `openstack_service_setup_host_python_interpreter` or just `ansible_python_interpreter` for host/group_vars | 11:59 |
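The override check noonedeadpunk asks for can be done with a quick grep, assuming the standard OSA deploy-config layout under /etc/openstack_deploy (the variable names are the ones listed in the message above):

```shell
# Search the deploy configuration for python interpreter overrides
# in user_variables, host_vars and group_vars.
grep -rn \
  -e octavia_service_setup_host_python_interpreter \
  -e openstack_service_setup_host_python_interpreter \
  -e ansible_python_interpreter \
  /etc/openstack_deploy/
```

No output means no overrides are set in the deploy config.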
admin0 | none noonedeadpunk | 11:59 |
noonedeadpunk | ah wait | 12:01 |
noonedeadpunk | lol | 12:02 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible-os_octavia master: Delegate info gathering to setup host https://review.opendev.org/c/openstack/openstack-ansible-os_octavia/+/766693 | 12:03 |
noonedeadpunk | I'm just blind | 12:03 |
openstackgerrit | Merged openstack/openstack-ansible-openstack_hosts master: Fix libsystemd version for Centos https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/766030 | 12:14 |
*** pto has quit IRC | 12:21 | |
*** pto_ has joined #openstack-ansible | 12:21 | |
*** newtim_ has joined #openstack-ansible | 12:51 | |
noonedeadpunk | admin0: does ^ work for you now? | 13:13 |
openstackgerrit | Dmitriy Rabotyagov proposed openstack/openstack-ansible master: Make container_bridge optional for provider networks https://review.opendev.org/c/openstack/openstack-ansible/+/765174 | 13:16 |
admin0 | noonedeadpunk, trying now sorry .. had been in 2 meetings | 13:19 |
admin0 | how do i tell bootstrap-ansible to download/clone only the os_octavia role | 13:23 |
admin0 | i accidentally deleted the branch | 13:23 |
*** pto_ has quit IRC | 13:24 | |
*** pto has joined #openstack-ansible | 13:24 | |
*** macz_ has joined #openstack-ansible | 13:25 | |
noonedeadpunk | you cant | 13:28 |
noonedeadpunk | you can just manually clone and checkout to specific sha though | 13:28 |
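Re-creating the deleted role checkout by hand might look like this; the SHA is a placeholder for whatever your OSA tag pins in ansible-role-requirements.yml:

```shell
# Restore a single role without re-running bootstrap-ansible.
cd /etc/ansible/roles
rm -rf os_octavia
git clone https://opendev.org/openstack/openstack-ansible-os_octavia os_octavia
cd os_octavia
git checkout <sha-from-ansible-role-requirements.yml>   # placeholder SHA
```

The pinned SHA for the release in use can be read from the ansible-role-requirements.yml shipped with the openstack-ansible checkout.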
*** pto has quit IRC | 13:29 | |
*** pto has joined #openstack-ansible | 13:29 | |
*** macz_ has quit IRC | 13:30 | |
admin0 | noonedeadpunk, its running now :) | 13:33 |
admin0 | with the latest patch | 13:33 |
admin0 | and -vvv | 13:33 |
openstackgerrit | Jonathan Rosser proposed openstack/openstack-ansible-openstack_hosts stable/ussuri: Fix libsystemd version for Centos https://review.opendev.org/c/openstack/openstack-ansible-openstack_hosts/+/766707 | 13:42 |
admin0 | noonedeadpunk, no errors with that step and it passed ( but 1 container failed) so redoing that one | 13:43 |
admin0 | noonedeadpunk, i logged into reviews .. can't i just +1 it ? | 13:45 |
*** pto has quit IRC | 13:46 | |
noonedeadpunk | I think you should be able to +1 it | 13:46 |
admin0 | i don't see that option in the ui | 13:49 |
noonedeadpunk | There should be a blue Reply button at the top | 13:50 |
noonedeadpunk | once you press it, a window opens with voting options and you can leave a comment as well | 13:50 |
*** miloa has quit IRC | 13:56 | |
*** spatel has joined #openstack-ansible | 14:01 | |
openstackgerrit | likui proposed openstack/openstack-ansible-os_nova master: Reuse the docs deps to benefit from constraints https://review.opendev.org/c/openstack/openstack-ansible-os_nova/+/766737 | 14:05 |
*** cshen has quit IRC | 14:25 | |
*** kukacz has quit IRC | 14:46 | |
*** rpittau is now known as rpittau|afk | 15:15 | |
*** cshen has joined #openstack-ansible | 15:19 | |
*** kukacz has joined #openstack-ansible | 15:20 | |
spatel | How difficult is it to run an ipv6 based openstack cloud? | 15:21 |
noonedeadpunk | not that difficult I'd say. But it all depends on the network setup you have, as it might range from just adding an ipv6 subnet to the public net up to neutron-bgp | 15:22 |
*** cshen has quit IRC | 15:23 | |
*** macz_ has joined #openstack-ansible | 15:26 | |
spatel | neutron-bgp? | 15:28 |
spatel | If i set up an IPv6 public IP pool on my router and set up a VLAN like for ipv4, then all i need is to just create an ipv6 network and that's all, right? | 15:29 |
*** cshen has joined #openstack-ansible | 15:29 | |
*** macz_ has quit IRC | 15:31 | |
noonedeadpunk | yeah, that would be the easiest way :) you can even just add another ipv6 pool to the existing public network, in case you allow VMs to get a public IP directly | 15:31 |
noonedeadpunk | (ie some setups might require a floating ip for public ipv4 - then the public vlan is not accessible to vms and you need some workaround) | 15:32 |
spatel | I don't have floating IPs, I have pure VLAN provider | 15:33 |
spatel | Do i need to tell OSA anywhere about ipv6 or is it transparent? | 15:33 |
noonedeadpunk | you do in case you want API endpoints to be accessible via ipv4 | 15:34 |
noonedeadpunk | then you need to have ipv6 VIP in addition to ipv4 | 15:34 |
noonedeadpunk | *to be accessible via ipv6 | 15:35 |
noonedeadpunk | I think we need some doc page describing this... | 15:37 |
noonedeadpunk | so specifically I can recall `extra_lb_tls_vip_addresses` (which should be your public ipv6 vip) and I think rewriting keepalived_instances to include ipv6 in the `vips_excluded` key.... | 15:46 |
spatel | I don't think i need ipv5 endpoint | 15:46 |
spatel | ipv6* | 15:46 |
noonedeadpunk | we totally should make this cleaner and document it.... | 15:46 |
spatel | all i need is an ipv6 public network so VMs are reachable from the public internet | 15:47 |
noonedeadpunk | then you should not need osa changes I think | 15:47 |
noonedeadpunk | but again - you know who the networking expert is here :p | 15:47 |
spatel | noonedeadpunk: thank you! just trying to understand if i am missing something here. but it looks like it's very simple as long as the network handles ipv6 traffic | 15:53 |
spatel | noonedeadpunk: any word on Victoria release :) | 15:54 |
spatel | sorry for pushing but it's kind of holding up my deployment :( | 15:54 |
noonedeadpunk | I'd love to do it, but technically can't while the gates are broken | 15:55 |
noonedeadpunk | I really was ready on monday but then 8.3 was released... | 15:55 |
noonedeadpunk | https://review.opendev.org/c/openstack/openstack-ansible/+/766244 has failed in gates.... | 15:56 |
noonedeadpunk | so we need to recheck and pray actually... | 15:57 |
spatel | :( | 16:02 |
admin0 | spatel, yes .. | 16:03 |
admin0 | just add the ipv6 like any ipv6 | 16:03 |
admin0 | with the gw in the router and the range in a vlan | 16:03 |
admin0 | and it will just work out of the box | 16:03 |
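admin0's "gateway on the router, range in a VLAN" recipe maps to a couple of CLI calls. A sketch, assuming an existing provider network named `public` and documentation-prefix addresses; the SLAAC modes are one common choice, not something stated in the conversation:

```shell
# Attach an IPv6 subnet to an existing VLAN provider network.
# Network name, prefix and gateway are examples only.
openstack subnet create public-v6 \
  --network public \
  --ip-version 6 \
  --subnet-range 2001:db8:42::/64 \
  --gateway 2001:db8:42::1 \
  --ipv6-ra-mode slaac \
  --ipv6-address-mode slaac
```

VMs on that network then pick up IPv6 addresses alongside their IPv4 ones, with the physical router at ::1 as the gateway.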
spatel | +1 that is what i want | 16:04 |
admin0 | spatel, could we not have just used br-vlan.27 directly ? | 16:04 |
admin0 | instead of creating that v-eth-pair ? | 16:04 |
admin0 | moving from poc -> prod, I see packets going out of the box .. but not arriving back .. the network guys say it's OK on their end | 16:04 |
spatel | You can do that directly too; in that case you need to create br-lbaas on each compute node and create the lb-lbaas-mgmt network telling it to use the br-lbaas bridge to wire up the amphora | 16:05 |
spatel | In short, make sure your amphora can talk directly to the octavia container.. whatever technology you use.. | 16:06 |
admin0 | aah .. creating br-lbaas on each compute node -- did not work .. as the error is "cannot add a bridge to a bridge" . then used mgariepy's method .. and it was all mixed up | 16:06 |
spatel | You need to tell neutron to attach the amphora to the br-lbaas bridge instead of br-vlan | 16:07 |
spatel | In my method neutron creates a VLAN.27 port inside br-vlan and attaches it to the amphora | 16:08 |
spatel | but if you want a separate bridge then just tell neutron to create the VLAN.27 port on the br-lbaas bridge and attach it to the amphora | 16:09 |
spatel | Finally i created a monitoring script to monitor the openstack control plane: 1. create VM 2. ping VM 3. destroy VM | 16:14 |
spatel | it runs every 5 minutes. | 16:15 |
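The create/ping/destroy loop spatel describes can be sketched as a small shell canary. Flavor, image and network names are assumptions, and the alerting hook is left as an echo:

```shell
#!/usr/bin/env bash
# Control-plane canary: boot a VM, ping it, delete it.
# Any failed step is an alert condition.
set -u
VM="canary-$(date +%s)"
if ! openstack server create --flavor m1.tiny --image cirros \
       --network public --wait "$VM" >/dev/null; then
  echo "ALERT: server create failed"; exit 1
fi
# Pull the first IPv4 address out of the server's address list.
IP=$(openstack server show "$VM" -f value -c addresses |
     grep -oE '([0-9]{1,3}\.){3}[0-9]{1,3}' | head -1)
ping -c 3 -W 5 "$IP" >/dev/null || echo "ALERT: $VM ($IP) unreachable"
openstack server delete --wait "$VM"
```

Needs a sourced openrc / clouds.yaml and network reachability from the monitoring host to the VM's network.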
*** chkumar|ruck is now known as raukadah | 16:21 | |
admin0 | spatel, you should also create a canary in every hypervisor and monitor that canary | 16:24 |
admin0 | canary = very lightweight small instance | 16:24 |
spatel | why every hypervisor ? | 16:24 |
spatel | This script is just to check that my control plane is functional, mainly rabbitMQ | 16:25 |
noonedeadpunk | spatel: well I'd rather leverage rally for that - it can also give you an SLA report | 17:15 |
noonedeadpunk | and you can define more complex tests - eventually anything supported by tempest (which is almost everything I guess) | 17:16 |
noonedeadpunk | https://rally.readthedocs.io/en/latest/overview/overview.html#use-cases | 17:18 |
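A minimal rally task file for exactly this boot-and-delete style check. The `NovaServers.boot_and_delete_server` scenario ships with rally-openstack; the flavor/image names and the runner numbers are assumptions:

```shell
# Write a minimal rally task definition. Actually running it needs a
# configured rally deployment, so the invocation is left commented out.
cat > boot-and-delete.yaml <<'EOF'
---
NovaServers.boot_and_delete_server:
  - args:
      flavor: {name: "m1.tiny"}
      image: {name: "cirros"}
    runner: {type: "constant", times: 10, concurrency: 2}
    sla: {failure_rate: {max: 0}}
EOF
# rally task start boot-and-delete.yaml
echo "wrote $(wc -l < boot-and-delete.yaml) lines"
```

The `sla` block is what produces the pass/fail SLA report mentioned above; rally can also export results for external dashboards.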
noonedeadpunk | it's useful to have a vm on every HV in case you do need to check if billing is correct I guess... | 17:19 |
spatel | noonedeadpunk: let me take a look | 17:20 |
spatel | I wrote a dirty script for my monitoring tool but will definitely look into how rally can fit into my monitoring | 17:20 |
admin0 | well, if you are a public cloud and selling stuff, you want to be on top of the game .. being not reactive but proactive .. so you need to know that a customer has issue before they notify you via phone call/ticket | 17:21 |
admin0 | i come from a public cloud background . so its been a habit | 17:21 |
admin0 | so monitoring a small instance on each hypervisor tells you that the hypervisor is good, that its networking agents are good, and that routing is good as well | 17:22 |
admin0 | this would be on top of your api monitoring to create/delete instances, networks, volumes, attach volumes etc | 17:23 |
spatel | We are running a private cloud so the pressure is lower, but yes.. i do need a smart way to check all components | 17:23 |
spatel | admin0: are you talking about rally | 17:23 |
admin0 | nah | 17:24 |
admin0 | terraform does it just well | 17:24 |
admin0 | let me share what i use | 17:24 |
spatel | i need something interactive: create a vm, grab its IP address, try to ssh or ping it, etc.. | 17:25 |
spatel | terraform is good for deployment but i'm not sure about fetching IPs and doing such tasks | 17:25 |
spatel | noonedeadpunk: rally looks good if we can put that metrics in influx/grafana :) | 17:28 |
*** dave-mccowan has joined #openstack-ansible | 17:31 | |
admin0 | https://gist.github.com/a1git/1fddc213d030901051564fb01039a13e - i use something like this to quick-test an aio // can be expanded | 17:31 |
spatel | let me check | 17:32 |
*** gyee has joined #openstack-ansible | 17:33 | |
spatel | is this script part of your monitoring tool? | 17:34 |
spatel | like alert folks about something wrong etc.. | 17:34 |
admin0 | this is just to test .. if the output is good, it means all apis are good .. we use zabbix in some deployments, check_mk in others, to monitor the canaries and do alerting | 17:35 |
admin0 | this is not part of a monitoring script .. it was just something of mine to quick-test a new osa install .. | 17:35 |
admin0 | but terraform is well documented to expand this | 17:35 |
spatel | That is what i did. my alert script creates a VM / pings the VM / deletes the VM (if it fails at any step it reports to the NOC) | 17:36 |
noonedeadpunk | spatel: well I had results in prometheus, so I think you can put in influx as well | 17:36 |
spatel | we are heavily using terraform for all kinds of application deployment. GCP/AWS/Tencent/AliCloud/Openstack (we run our stuff on all these cloud providers) | 17:37 |
spatel | noonedeadpunk: looks like rally is going to be my next goal to monitor API performance | 17:38 |
noonedeadpunk | any core around - it would be great to merge https://review.opendev.org/c/openstack/openstack-ansible-galera_client/+/765779 | 17:38 |
*** tosky has quit IRC | 17:46 | |
admin0 | kleini, https://www.openstackfaq.com/openstack-octavia/ | 17:53 |
openstackgerrit | Merged openstack/openstack-ansible-galera_client master: Deprecate openstack-ansible-galera_client role https://review.opendev.org/c/openstack/openstack-ansible-galera_client/+/765779 | 17:55 |
*** SecOpsNinja has left #openstack-ansible | 17:56 | |
jrosser | admin0: it would be better to document the right things to put in used_ips and the dhcp allowed range to avoid a conflict | 17:57 |
admin0 | i tried to work it out .. but could not | 17:58 |
admin0 | they eventually overlap | 17:58 |
noonedeadpunk | I will be deploying octavia pretty soon - will try to follow our docs and will adjust them with missing parts | 17:58 |
*** yann-kaelig has joined #openstack-ansible | 17:59 | |
jrosser | they should not; that makes it look like there is a bug you have to work around | 17:59 |
admin0 | when we have time, can we take a 10.0.0.0/24 example range and work it out | 17:59 |
jrosser | i believe you have used_ips the wrong way round | 18:00 |
jrosser | used_ips: "10.62.0.1,10.62.0.99" | 18:00 |
jrosser | ^ this says to OSA "do not use these addresses for container interfaces" | 18:01 |
admin0 | aha .. | 18:01 |
admin0 | jrosser, where do i send you a pizza ? | 18:01 |
jrosser | the clash is almost guaranteed combining that with octavia_management_net_subnet_allocation_pools: 10.62.0.101-10.62.7.250 | 18:01 |
admin0 | i got it now | 18:01 |
*** MickyMan77 has quit IRC | 18:02 | |
jrosser | awesome :) | 18:02 |
*** jbadiapa has quit IRC | 18:03 | |
admin0 | so basically octavia_management_net_subnet_allocation_pools = used_ips :) | 18:05 |
jrosser | pretty much, yes | 18:05 |
jrosser | in other circumstances you would add your routers/firewalls/other hardware to used_ips | 18:06 |
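jrosser's point can be sanity-checked numerically: convert the range bounds to integers and confirm the reserved `used_ips` block ends below the start of the Octavia allocation pool. A small bash sketch using the .1-.99 / .101 values from the discussion:

```shell
#!/usr/bin/env bash
# Convert a dotted-quad IPv4 address to an integer for range comparison.
ip2int() {
  local IFS=.
  read -r a b c d <<< "$1"
  echo $(( (a << 24) + (b << 16) + (c << 8) + d ))
}

# used_ips reserves .1-.99 for OSA container interfaces;
# the Octavia allocation pool must start above that.
used_end=$(ip2int 10.62.0.99)
pool_start=$(ip2int 10.62.0.101)

if (( pool_start > used_end )); then
  echo "no overlap"
else
  echo "OVERLAP: shrink one of the ranges"
fi
```

With these values the check prints "no overlap"; swap the pool start below .99 and it flags the clash.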
admin0 | updated .. thanks | 18:07 |
*** dave-mccowan has quit IRC | 18:12 | |
*** dave-mccowan has joined #openstack-ansible | 18:16 | |
*** dave-mccowan has quit IRC | 18:20 | |
*** dave-mccowan has joined #openstack-ansible | 18:21 | |
*** andrewbonney has quit IRC | 18:24 | |
spatel | I have a random question related to DHCP. (we have 3 controller nodes, which means 3 DHCP agents - in that case which one hands the IP address to the VM?) | 18:43 |
spatel | does dnsmasq share its DHCP lease database between the 3 agents? | 18:44 |
*** maharg101 has quit IRC | 19:29 | |
*** mgariepy has quit IRC | 19:58 | |
*** dave-mccowan has quit IRC | 20:03 | |
*** dave-mccowan has joined #openstack-ansible | 20:07 | |
*** lemko3 has joined #openstack-ansible | 20:14 | |
*** lemko has quit IRC | 20:14 | |
*** lemko3 is now known as lemko | 20:14 | |
*** cshen has quit IRC | 20:36 | |
*** cshen has joined #openstack-ansible | 20:58 | |
*** rfolco has quit IRC | 21:05 | |
*** tosky has joined #openstack-ansible | 21:05 | |
admin0 | spatel, if you are targeting public, then don't make your controllers the network nodes .. what we do is make the compute nodes also network nodes .. that way, any 1 node going down does not take all your customers down.. and you can have N dhcp agents per network .. (in an old non-osa openstack) we found up to 5 dhcp per network .. imagine a public cloud with 10,000+ tenants; even if only 10% is actively using it, that's 1000 | 21:18 |
admin0 | networks with 4-5 dhcp each = 5000 dhcp in total | 21:18 |
admin0 | we ran a cron to count dhcp agents per network and delete any beyond 2 | 21:18 |
admin0 | in osa you can fix that via user_variables | 21:19 |
spatel | currently we have 3 controller node and they are running DHCP service | 21:19 |
admin0 | also, for public services, offer direct-dhcp public ips and not only floating IPs | 21:20 |
admin0 | that way, instances can get a direct dhcp public ip .. and there is no need to create a network, create a router, add a 1:1 NAT etc | 21:20 |
admin0 | as those waste IP addresses | 21:20 |
admin0 | this way => https://www.openstackfaq.com/openstack-add-direct-attached-dhcp-ip/ | 21:21 |
admin0 | you have to ensure that the .1 is added to the vlan specified in the router | 21:21 |
admin0 | this is a way that is used in public clouds | 21:21 |
admin0 | and customers are also happy ( cpanel/directadmin/voip licensing) that they get a direct public ip | 21:21 |
spatel | let me understand what you trying to say | 21:23 |
admin0 | the only thing is, via direct dhcp method, unlike floating ip, if they delete the instance, the ip is also gone | 21:23 |
spatel | what is direct dhcp method? | 21:23 |
admin0 | :D | 21:23 |
admin0 | i just explained above :) | 21:23 |
admin0 | let me elaborate | 21:24 |
admin0 | ipv4 is scarce, you don't get much of it .. so you want to maximize it .. now, via floating ip .. router = .1 .. then in our HA setup .2 .3 .4 = dhcp .. then customer router = .5 .. and then his floating ip = .6 .. another customer: his router = .7, his floating ip = .8 .. so for every router you create, you lose an IP address | 21:25 |
admin0 | via the direct dhcp method, your router is .1 and the instance gets a direct public IP .. no need to create a network or add a floating ip .. | 21:25 |
admin0 | that way, in terms of scale, you are saving resources .. coz every private network = 3 dhcp namespaces + 3 routers in HA = a lot of stuff in the network | 21:26 |
admin0 | if you have 10,000 tenants , its 10,000 x 3 of routers and dhcp | 21:26 |
admin0 | thats huge | 21:26 |
admin0 | overhead | 21:26 |
admin0 | via the direct dhcp method, you get an IP in the instance directly .. no floating ip, no network creation, | 21:26 |
admin0 | its used in public cloud provider .. so not properly documented | 21:27 |
admin0 | i started in public cloud, so i documented it | 21:27 |
spatel | We have Physical router is my openstack gateway for all VM | 21:27 |
spatel | we don't run L3 on software layer. | 21:27 |
admin0 | you are stil creating network right ? | 21:27 |
spatel | Yes | 21:27 |
admin0 | that spawns 3 dhcp namespaces and 3 router namespaces in HA | 21:27 |
spatel | NO | 21:28 |
admin0 | every (private network) will have a dhcp and a router to connect to ext- | 21:28 |
spatel | Oh wait.. i know what you saying.. | 21:28 |
spatel | when i create a subnet it reserves 3 public IPs for the DHCP namespaces. | 21:29 |
spatel | reading your doc | 21:30 |
spatel | admin0: explain me this line (via direct dhcp method, your router is .1, the instance get direct public IP .. no need to create network or add floating ip ) | 21:31 |
admin0 | :D | 21:31 |
spatel | How my instance get direct public IP? | 21:31 |
admin0 | let me try | 21:31 |
spatel | there must be DHCP somewhere in network | 21:31 |
admin0 | you have only 1-3 dhcp process for that specific public network | 21:31 |
admin0 | let me try to simplify | 21:31 |
admin0 | in a public cloud, the typical customer is one who wants to do web hosting, correct -- take a vm and run a control panel like cpanel or directadmin .. he wants a server with an IP address, right? | 21:32 |
admin0 | so without direct-dhcp, you create a tenant, then as the tenant: 1. create a network, then 2. a router, then 3. assign a floating ip and 4. map that ip -> instance | 21:33 |
admin0 | the end result is .. the instance is mapped to a public ip | 21:33 |
admin0 | with direct dhcp, you don't need to create networks or router, the customer gets direct IP | 21:33 |
admin0 | bro.. just trust me .. follow my notes .. and enjoy the "magic" :D | 21:34 |
spatel | Ok i think i am following you now. | 21:34 |
admin0 | this is only done when you want to grow as public cloud provider and want to have like 10k tenants | 21:34 |
spatel | In my case i am running private cloud and all my tenants using shared subnet. | 21:35 |
admin0 | with 10k tenants, 1 router each, you are losing 1k public IPs just for routers, and then inviting 3x dhcp processes per network = 30,000 dhcp processes and 30,000 routers | 21:35 |
admin0 | the end result is.. the vm gets a public iP | 21:35 |
admin0 | everything else remains the same | 21:35 |
spatel | totally agreed | 21:35 |
admin0 | exactly .. so this direct dhcp method is not documented .. outside of running openstack at scale as a public provider, people don't even know or think about it | 21:36 |
spatel | instead of direct DHCP why not create a subnet and share it with tenants | 21:36 |
admin0 | i think the end result is the same isn't it ? | 21:37 |
admin0 | " why not create subnet and shared with tenants " document this man :) | 21:37 |
spatel | When you create network/subnet you can tell it --shared or --private | 21:38 |
spatel | if it's shared then it will be visible to all tenants and they can use it to create VMs | 21:39 |
spatel | we have a handful of tenants and projects since it's a private cloud. but in public it would be a mess to deal with | 21:39 |
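Both variants discussed here (admin0's direct-dhcp network and spatel's shared subnet) come down to a shared external VLAN provider network with DHCP enabled. A sketch with example names, VLAN id and RFC 5737 addresses; none of these values are from the log:

```shell
# Shared provider network: every tenant sees it and VMs get addresses
# straight from its DHCP pool - no tenant router or floating IP needed.
openstack network create public-vlan \
  --share --external \
  --provider-network-type vlan \
  --provider-physical-network vlan \
  --provider-segment 27
openstack subnet create public-vlan-subnet \
  --network public-vlan \
  --subnet-range 203.0.113.0/24 \
  --gateway 203.0.113.1 \
  --allocation-pool start=203.0.113.10,end=203.0.113.250 \
  --dhcp
```

The .1 gateway must exist on the physical router for that VLAN, and the allocation pool should exclude any addresses the router or other hardware already uses.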
admin0 | my past 3 companies were public cloud providers :) | 21:40 |
admin0 | like at scale :) | 21:40 |
*** newtim_ has quit IRC | 21:41 | |
admin0 | preparing myself for trove from monday :) | 21:41 |
admin0 | my belief is, if osa provides a project as stable, we ( operators) should be able to run it | 21:42 |
admin0 | trove -> designate -> magnum | 21:43 |
admin0 | you can give the dhcp method a try if you have an acceptance/lab env, to see how it works | 21:44 |
spatel | After successfully deploying senlin, my next goal is to use designate | 21:44 |
spatel | does trove support postgresql ? | 21:44 |
admin0 | designate with osa, i already have it implemented in office | 21:44 |
admin0 | postgresql is my 1st goal | 21:44 |
admin0 | mariadb second | 21:44 |
admin0 | i think it does | 21:45 |
spatel | let me know how it goes | 21:45 |
admin0 | btw, octavia is not complete.. i found out that for https, and to have letsencrypt/zerossl, it needs barbican | 21:45 |
spatel | i want to do magnum but with vlan provider where my container get direct IP from my VLAN | 21:46 |
admin0 | so i have to set up barbican and get https working, then i can finally mark it as done | 21:46 |
admin0 | the direct dhcp method i documented is exactly that.. it provides a direct IP from the vlan | 21:46 |
admin0 | have a great weekend everyone | 21:47 |
spatel | Thank you man!!! | 21:48 |
spatel | have a great weekend and stay safe | 21:48 |
*** spatel has quit IRC | 22:05 | |
*** dave-mccowan has quit IRC | 22:31 |
Generated by irclog2html.py 2.17.2 by Marius Gedminas - find it at https://mg.pov.lt/irclog2html/!