My deployment:
Trying to get a Metabase (data visualisation) instance running on GKE.
My approach:
Deploying the two components as separate workloads (component 1 = the app, a Docker image; component 2 = the PostgreSQL database, also a Docker image).
Deploying the two workloads into the cluster goes smoothly, as does configuring the requisite environment variables.
But for love nor money, the Metabase app refuses to connect to the PostgreSQL database. It keeps choking on its boot sequence, complaining that it can't establish a connection with the DB server.
I've created a ClusterIP service for the PostgreSQL deployment and configured a 5432:5432 port mapping. And I've passed its internal cluster IP to the app as the internal DB host. No give...
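For reference, the Service side of my setup looks roughly like this (names and labels below are placeholders, not my exact config):

```yaml
apiVersion: v1
kind: Service
metadata:
  name: postgres-service   # placeholder name
spec:
  type: ClusterIP
  selector:
    app: postgres          # must match the labels on the PostgreSQL pods
  ports:
    - port: 5432           # port the Service exposes inside the cluster
      targetPort: 5432     # port the PostgreSQL container listens on
```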
Anything else worth trying in my quest to debug this? It's proving a stubborn issue to resolve!
Hello @danielrosehill,
Welcome to Google Cloud Community!
I presume you already double-checked the environment variables. Just make sure the Metabase container receives the correct environment variables for the database connection, including hostname/IP, port, username, and password. Verify these values match those configured for your PostgreSQL deployment.
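The official Metabase image reads its application-database settings from the `MB_DB_*` environment variables, so the env section of the Metabase Deployment should look roughly like this (the service name and credentials below are placeholders; substitute your own):

```yaml
env:
  - name: MB_DB_TYPE
    value: postgres
  - name: MB_DB_HOST
    value: postgres-service   # placeholder: your PostgreSQL Service name, not a pod name
  - name: MB_DB_PORT
    value: "5432"
  - name: MB_DB_USER
    value: metabase           # placeholder
  - name: MB_DB_PASS
    value: changeme           # placeholder: prefer a Kubernetes Secret in practice
  - name: MB_DB_DBNAME
    value: metabase           # placeholder
```

Using the Service name as `MB_DB_HOST` is more robust than a hard-coded cluster IP, since the IP can change if the Service is recreated.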
Here's what you can do:
1. Test network connectivity. Use `nc` or `telnet` from within a running pod to test the connection to the PostgreSQL service on port 5432:

```
nc -z postgresql-service 5432  # Replace 'postgresql-service' with your PostgreSQL service name
```

If it succeeds, you have network connectivity within the cluster.

2. Check that the service exists and exposes the expected port:

```
kubectl get service <your-postgresql-service>
```

3. Check DNS resolution. Run `nslookup postgresql-service` or `ping postgresql-service` from within a pod to see if the name resolves to the correct internal IP address of the PostgreSQL service. Ensure your pods use the service name for the database hostname, not the pod name.

If you find any error messages related to this issue, don't hesitate to post back any questions here.
Any chance you can post your YAMLs?