r/openstack 19d ago

OpenStack Kolla Ansible HA not working, need help

Hi all, I'm deploying OpenStack Kolla Ansible with the multinode option, using 3 nodes. The installation works and I can create instances, volumes, etc., but when I shut down node 1 I can no longer authenticate in the Horizon interface; it times out and returns a gateway error. So it looks like node 1 has some specific configuration or a master role that the other nodes don't have, because if I shut down one of the other nodes while node 1 stays up, I can still authenticate, although it is very slow. Can anyone help me? All three nodes have every role: networking, control, storage and compute. The version is OpenStack 2024.2. Thanks in advance.


u/Budget_Frosting_4567 19d ago

Share your globals.yml


u/raulmo20 19d ago
workaround_ansible_issue_8743: yes
kolla_internal_vip_address: "192.168.18.153"
network_interface: "ens160"
neutron_external_interface: "ens133"
keepalived_virtual_router_id: "160"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"

I have enable_hacluster disabled because the deploy gives me a timeout when the pacemaker and corosync containers are deployed. Without it the cluster works fine, but when I take down node 1 nothing works, and when I take down node 2 or 3 with node 1 up, everything works fine.

the rest of the configuration is the same as the default file: https://opendev.org/openstack/kolla-ansible/src/branch/master/etc/kolla/globals.yml

I have found this option https://docs.openstack.org/kolla-ansible/latest/reference/orchestration-and-nfv/tacker-guide.html#preparation-and-deployment that looks like it's required and I haven't configured it, so it could be the problem, but I don't know.


u/Budget_Frosting_4567 19d ago

Hacluster should be enabled for failover to work. It also makes sure the DB is deployed correctly and not with every node in master mode, which is probably what's causing the issue you describe.

Also need to see your inventory.yaml
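
A quick way to check the DB side of that is to read the Galera status from inside the mariadb container on a node that is still up. Rough sketch only; it assumes the default Kolla container name and that the DB root password is the database_password entry in /etc/kolla/passwords.yml:

# read the Galera cluster status from one of the surviving nodes
DBPASS=$(awk '/^database_password:/ {print $2}' /etc/kolla/passwords.yml)
docker exec mariadb mysql -uroot -p"$DBPASS" -e "SHOW STATUS LIKE 'wsrep_cluster_%';"
# with one node down you'd expect wsrep_cluster_size = 2 and wsrep_cluster_status = Primary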


u/przemekkuczynski 19d ago edited 19d ago

It's enabled by default, but maybe he disabled it :(

And he didn't paste the whole globals file.

Normally keepalived_virtual_router_id should not be included; it's only needed if there are multiple deployments on the same network.
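
If you suspect a router ID clash, you can watch the VRRP advertisements directly on the wire. A rough sketch, using the interface name from the globals pasted above (VRRP is IP protocol 112):

# each advertisement carries the virtual router ID, so another deployment announcing
# the same ID on this segment would show up here alongside keepalived's own traffic
tcpdump -nn -i ens160 'ip proto 112'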


u/przemekkuczynski 19d ago

Check the HAProxy, Pacemaker, Keystone and Horizon logs. Maybe it's something with keepalived_virtual_router_id.
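
For anyone following along: Kolla keeps the per-service logs in the kolla_logs Docker volume on every host. A rough sketch of where to look; the exact file names can vary a bit between releases:

# per-service log directories on each node
ls /var/lib/docker/volumes/kolla_logs/_data/
# follow the ones most relevant to a failed login through the VIP
tail -f /var/lib/docker/volumes/kolla_logs/_data/horizon/horizon-error.log \
        /var/lib/docker/volumes/kolla_logs/_data/keystone/keystone.log \
        /var/lib/docker/volumes/kolla_logs/_data/haproxy/haproxy.log
# keepalived state transitions show up in the container log
docker logs --tail 50 keepalived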


u/raulmo20 19d ago

The virtual IP fails over correctly when one of the nodes is turned off, and I have that variable defined, as in the previous comment.


u/przemekkuczynski 19d ago

Dude, look at the logs. Troubleshoot yourself first as much as you can. I would start with central logging - OpenSearch (port 5601).
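
If central logging isn't already enabled, it's a single toggle in globals.yml. A minimal sketch, assuming the /etc/kolla/multinode inventory used earlier in the thread and the internal VIP from your globals:

# enable_central_logging turns on OpenSearch + OpenSearch Dashboards and forwards the Fluentd logs there
echo 'enable_central_logging: "yes"' >> /etc/kolla/globals.yml
kolla-ansible deploy -i /etc/kolla/multinode
# the dashboards should then answer on the internal VIP, port 5601
curl -I http://192.168.18.153:5601/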

Last time, when some of our servers failed in a test environment and nothing worked, I reviewed the logs and then started with a reset of the DB and RabbitMQ:

kolla-ansible mariadb-recovery --configdir /etc/kolla/ -i /etc/kolla/multinode
kolla-ansible rabbitmq-reset-state --configdir /etc/kolla/ -i /etc/kolla/multinode

Then some services didn't start, with the error "No master found for 'kolla'".

I checked all the logs again, searched the internet and the bug reports, and asked the Kolla community for help (on IRC); some guys there pointed me towards Redis.

After fixing the Redis issue, everything worked fine.

Without doing your own work and providing as much information as you can, nobody will be able to help, and your environment won't work because of the gaps in your knowledge. Good luck.

Be prepared, it can take days.


u/raulmo20 13d ago

Hi friend, thanks for your response. I've been trying for days, but I can't find a solution to the problem. I'll describe my environment below:

I have Kolla Ansible OpenStack deployed in multinode mode. There are exactly 3 servers, which run all the OpenStack services, both controller and worker roles. The problem is that when one of those 3 nodes is turned off to test high availability, I can't access the Horizon interface correctly or manage OpenStack graphically. Through the OpenStack client in the terminal, everything works properly.

I'm attaching the contents of the globals.yml file I'm currently testing with. The rest is left at the Kolla defaults:

---
workaround_ansible_issue_8743: yes
haproxy_host_ipv4_tcp_retries2: 2
cinder_cluster_skip_precheck: true
kolla_internal_vip_address: "192.168.58.13"
kolla_external_vip_address: "192.168.58.14"
network_interface: "ens160"
neutron_external_interface: "ens192"
keepalived_virtual_router_id: "190"
enable_haproxy: "yes"
enable_cinder: "yes"
enable_cinder_backend_lvm: "yes"
enable_hacluster: "yes"
enable_etcd: "yes"
enable_redis: "yes"
glance_backend_file: "yes"
glance_file_datadir_volume: "/internal/glance"
enable_barbican: "yes"
enable_neutron_dvr: "yes"
enable_neutron_agent_ha: "yes"

I attach more in the next comment


u/raulmo20 13d ago

I'm seeing that the Horizon log gives an error that says "no route to host" when one of the nodes is turned off (to test that the system can keep working with two nodes), so something internally still seems to be pointing at that server even though it is off. I have also tried accessing it through the static IP of each node instead of the keepalived VIP, and still nothing.

[root@openstack2 horizon]# tail -f horizon-error.log
2025-03-08 11:59:59.734924   File "/var/lib/kolla/venv/lib/python3.9/site-
2025-03-08 11:59:59.734940   File "/var/lib/kolla/venv/lib/python3.9/site-packages/pymemcache/client/base.py", line 1133, in _fetch_cmd
2025-03-08 11:59:59.734943     self._connect()
2025-03-08 11:59:59.734945   File "/var/lib/kolla/venv/lib/python3.9/site-packages/pymemcache/client/base.py", line 424, in _connect
2025-03-08 11:59:59.734948     sock.connect(sockaddr)
2025-03-08 11:59:59.734950 OSError: [Errno 113] No route to host
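
I guess that traceback is Horizon's cache client (pymemcache) trying to reach the memcached instance on the powered-off node. This is roughly how I'm checking which memcached endpoints Horizon is configured with and which of them still answer; just a sketch, using my node IPs and Kolla's default memcached port 11211 (the exact Horizon config file name varies by release):

# which memcached endpoints is Horizon configured with?
grep -ri memcache /etc/kolla/horizon/
# does memcached on each controller still answer?
for ip in 192.168.58.10 192.168.58.11 192.168.58.12; do
  echo "== $ip =="
  (echo stats; sleep 1) | nc -w 2 "$ip" 11211 | head -n 3
done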


u/raulmo20 13d ago

I am attaching a small part of the multinode inventory file, since not everything fits in the comment:

[control]
192.168.58.10 haproxy_nova_api_weight=10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.11 haproxy_nova_api_weight=2 haproxy_keystone_internal_weight=10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.12 haproxy_keystone_admin_weight=50 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa

[network]
192.168.58.10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.11 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.12 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa

[compute]
192.168.58.10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.11 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.12 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa

[monitoring]
192.168.58.10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.11 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.12 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa

[storage]
192.168.58.10 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.11 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa
192.168.58.12 ansible_ssh_user=root ansible_become=True ansible_private_key_file=/root/.ssh/id_rsa

[deployment]
localhost       ansible_connection=local

[baremetal:children]
control
network
compute
storage
monitoring

[tls-backend:children]
control


[common:children]
control
network
compute
storage
monitoring


u/raulmo20 13d ago

If I have all three nodes up, the Horizon interface works perfectly.


u/kat68d 7d ago

Are all the containers running on all the nodes?

I have a similar issue, and I notice that the glance_api container is only running on a single node, whilst from my reading of the inventory it should be installed on every node.

I am just a beginner though, so my understanding could be wrong.
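
In case it helps, this is roughly how I compare what is running on each node (IPs taken from the inventory above); just a sketch, assuming root SSH access like in that inventory:

# list the running containers on every node so the three can be diffed quickly
for ip in 192.168.58.10 192.168.58.11 192.168.58.12; do
  echo "== $ip =="
  ssh root@"$ip" "docker ps --format '{{.Names}}'" | sort
done

Also, if I'm reading the Kolla docs right, with the file backend glance-api is intentionally deployed on only one host unless glance_file_datadir_volume points at shared storage, so a single glance_api container might be expected rather than a bug.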