Load balancer controller out of sync with GCP and Ingress annotations
I have a use case where we need to reassign a reserved external IP from one Ingress to another. In the GCP Console and CLI I can see that all forwarding rules, URL maps, etc. are configured correctly, but `kubectl describe ing/<ingress-name>` reports the following error:
```
Events:
  Type     Reason  Age                From                     Message
  ----     ------  ----               ----                     -------
  Warning  Sync    6m (x28 over 55m)  loadbalancer-controller  Error during sync: googleapi: Error 400: Invalid value for field 'resource.IPAddress': 'XX.XXX.XXX.XXX'. Invalid IP address specified., invalid
```
XX.XXX.XXX.XXX is the old IP that was assigned to the Ingress prior to reconfiguring the frontend (the forwarding rules in GCP). If I view the load balancer configuration in the GCP Console I see a different IP, YY.YYY.YYY.YYY. The IP XX.XXX.XXX.XXX is no longer reserved in GCP at all.
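For anyone hitting the same state, the reservations and the forwarding rules can be cross-checked from the CLI. A sketch, assuming global (external HTTP(S) LB) resources; adjust the flags for regional ones:

```shell
# List reserved global addresses -- the old IP should be absent here
gcloud compute addresses list --global

# List global forwarding rules and the IPs they actually hold
gcloud compute forwarding-rules list --global \
    --format="table(name, IPAddress, target)"
```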
I have the following annotations on my Kubernetes Ingress:

```yaml
ingress.kubernetes.io/forwarding-rule: [redacted]        # maps YY.YYY.YYY.YYY:80 to the correct backends
ingress.kubernetes.io/https-forwarding-rule: [redacted]  # maps YY.YYY.YYY.YYY:443 to the correct backends
ingress.kubernetes.io/https-target-proxy: [redacted]
ingress.kubernetes.io/ssl-cert: [redacted]
ingress.kubernetes.io/target-proxy: [redacted]
ingress.kubernetes.io/url-map: [redacted]
```
Why is the IP XX.XXX.XXX.XXX still hanging around? It seems strange that I cannot reassign an external IP to a LoadBalancer/Ingress. Any insight into this problem would be appreciated.
one000mph replied:
> What was the motivation behind editing the annotations? Modifying them does not do anything to the LB; they are only there as a "status".
I don't think this is true in my experience: setting that annotation does modify the ingress controller's behavior, and there is documentation on how to modify annotations to configure the ingress controller.
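For example, the documented configuration annotations include one that binds an Ingress to a reserved address by name (as opposed to the controller-written status annotations above). A minimal sketch; the resource names are hypothetical:

```yaml
apiVersion: networking.k8s.io/v1   # older clusters used extensions/v1beta1
kind: Ingress
metadata:
  name: my-ingress                 # hypothetical name
  annotations:
    # The NAME (not the IP itself) of an address reserved with:
    #   gcloud compute addresses create my-reserved-ip --global
    kubernetes.io/ingress.global-static-ip-name: my-reserved-ip
spec:
  defaultBackend:
    service:
      name: my-service             # hypothetical backend Service
      port:
        number: 80
```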
This is quite an old issue and I think what I was trying to do was probably a fringe use case that might not be officially supported (yet?). If I run into it again I'll come back to this and reopen.