GCS credentials don't work

Hi, I am testing Medusa on GKE.
I have created the necessary IAM role, granted the permissions, and downloaded the credentials.
I have created a secret in the Kubernetes cluster with the credentials downloaded from GCP.
When I try to deploy Medusa, the container fails with the error below:

MEDUSA_MODE = GRPC
sleeping for 0 sec
Starting Medusa gRPC service
WARNING:root:The CQL_USERNAME environment variable is deprecated and has been replaced by the MEDUSA_CQL_USERNAME variable
WARNING:root:The CQL_PASSWORD environment variable is deprecated and has been replaced by the MEDUSA_CQL_PASSWORD variable
WARNING:root:The CQL_USERNAME environment variable is deprecated and has been replaced by the MEDUSA_CQL_USERNAME variable
WARNING:root:The CQL_PASSWORD environment variable is deprecated and has been replaced by the MEDUSA_CQL_PASSWORD variable
INFO:root:Init service
[2023-07-18 15:59:08,295] INFO: Init service
DEBUG:root:Loading storage_provider: google_storage
[2023-07-18 15:59:08,295] DEBUG: Loading storage_provider: google_storage
Traceback (most recent call last):
  File "/usr/lib/python3.10/runpy.py", line 196, in _run_module_as_main
    return _run_code(code, main_globals, None,
  File "/usr/lib/python3.10/runpy.py", line 86, in _run_code
    exec(code, run_globals)
  File "/home/cassandra/medusa/service/grpc/server.py", line 349, in <module>
    server.serve()
  File "/home/cassandra/medusa/service/grpc/server.py", line 65, in serve
    medusa_pb2_grpc.add_MedusaServicer_to_server(MedusaService(config), self.grpc_server)
  File "/home/cassandra/medusa/service/grpc/server.py", line 104, in __init__
    self.storage = Storage(config=self.config.storage)
  File "/home/cassandra/medusa/storage/__init__.py", line 75, in __init__
    self.storage_driver = self._connect_storage()
  File "/home/cassandra/medusa/storage/__init__.py", line 81, in _connect_storage
    google_storage = GoogleStorage(self._config)
  File "/home/cassandra/medusa/storage/abstract_storage.py", line 39, in __init__
    self.driver = self.connect_storage()
  File "/home/cassandra/medusa/storage/google_storage.py", line 39, in connect_storage
    with io.open(os.path.expanduser(self.config.key_file), 'r', encoding='utf-8') as json_fi:
FileNotFoundError: [Errno 2] No such file or directory: '/etc/medusa-secrets/credentials'

I have the secret available in the namespace and have re-verified that the name is correct.

I have tested a similar setup in AWS S3 and it works as expected.

Hi
What’s the secret name and filename with creds in it?
What path is it mounted to?

Mine looks like this:

# kubectl get secret medusa-bucket-key -o yaml
apiVersion: v1
kind: Secret
metadata:
  name: medusa-bucket-key
  namespace: default
type: Opaque
data:
  credentials: {GCP CREDS IN BASE64}
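For reference, each key under `data:` becomes a file name when the secret is mounted, so the `credentials` key above shows up as `/etc/medusa-secrets/credentials` inside the container. A minimal sketch of the mount (standard pod-spec fields; the volume name and exact location in your manifest are illustrative and depend on how you deploy Medusa):

```yaml
volumes:
  - name: medusa-bucket-key
    secret:
      secretName: medusa-bucket-key   # the Secret shown above
containers:
  - name: medusa
    volumeMounts:
      - name: medusa-bucket-key
        mountPath: /etc/medusa-secrets  # key "credentials" -> /etc/medusa-secrets/credentials
```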

Thanks, the issue was due to the key name: I had credentials.json instead of just credentials.
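For anyone hitting the same thing: when the secret is created with `--from-file`, the key name defaults to the file name, so you have to name the key explicitly. A sketch, assuming the downloaded GCP key file is called `gcp-key.json` (the file name is illustrative):

```
# Key defaults to "credentials.json" -> Medusa looks for "credentials" and fails:
kubectl create secret generic medusa-bucket-key --from-file=gcp-key.json

# Name the key "credentials" explicitly with key=path syntax:
kubectl create secret generic medusa-bucket-key \
  --from-file=credentials=gcp-key.json
```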

Hey @kiran.linux, I am also trying to perform a backup, using an AWS S3 bucket, but I'm facing an issue with the following error message:

INFO: Using credentials CensoredCredentials(access_key_id=A…K, secret_access_key=*****, region=us-west-2)
[2023-12-22 18:42:16,152] INFO: Connecting to s3 with args {}
--- Logging error ---
Traceback (most recent call last):
  File "/home/cassandra/.local/lib/python3.10/site-packages/medusa/storage/s3_base_storage.py", line 310, in _stat_blob
    resp = self.s3_client.head_object(Bucket=self.bucket_name, Key=object_key)
  File "/home/cassandra/.local/lib/python3.10/site-packages/botocore/client.py", line 553, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/cassandra/.local/lib/python3.10/site-packages/botocore/client.py", line 1009, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden

During handling of the above exception, another exception occurred:
[2023-12-22 18:42:16,578] ERROR: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.

Can you help me with this setup, since I can see yours is already working?