Medusa not able to connect to aws s3 bucket

When I run the following command to back up Cassandra:

medusa backup --backup-name dc2-test-backup --mode full

the following error comes up. Can anyone help with the configuration, in case I am doing something wrong?

[2023-12-22 18:42:16,121] WARNING: The CQL_USERNAME environment variable is deprecated and has been replaced by the MEDUSA_CQL_USERNAME variable
[2023-12-22 18:42:16,121] WARNING: The CQL_PASSWORD environment variable is deprecated and has been replaced by the MEDUSA_CQL_PASSWORD variable
[2023-12-22 18:42:16,142] INFO: Registered backup id dc2-test-backup
[2023-12-22 18:42:16,142] INFO: Monitoring provider is noop
[2023-12-22 18:42:16,152] INFO: Using credentials CensoredCredentials(access_key_id=A..K, secret_access_key=*****, region=us-west-2)
[2023-12-22 18:42:16,152] INFO: Connecting to s3 with args {}
--- Logging error ---
Traceback (most recent call last):
  File "/home/cassandra/.local/lib/python3.10/site-packages/medusa/storage/", line 310, in _stat_blob
    resp = self.s3_client.head_object(Bucket=self.bucket_name, Key=object_key)
  File "/home/cassandra/.local/lib/python3.10/site-packages/botocore/", line 553, in _api_call
    return self._make_api_call(operation_name, kwargs)
  File "/home/cassandra/.local/lib/python3.10/site-packages/botocore/", line 1009, in _make_api_call
    raise error_class(parsed_response, operation_name)
botocore.exceptions.ClientError: An error occurred (403) when calling the HeadObject operation: Forbidden

During handling of the above exception, another exception occurred:

[2023-12-22 18:42:16,297] ERROR: Error getting object from s3://nova-medusa-operator-dev/dc2/10-xxx-x-240.prometheus-ethtool-exporter.monitoring.svc.cluster.local/dc2-test-backup/meta/schema.cql
[2023-12-22 18:42:16,297] INFO: Starting backup using Stagger: None Mode: full Name: dc2-test-backup
[2023-12-22 18:42:16,297] INFO: Updated from existing status: -1 to new status: 0 for backup id: dc2-test-backup
[2023-12-22 18:42:16,298] INFO: Saving tokenmap and schema
[2023-12-22 18:42:16,578] ERROR: An error occurred (InvalidAccessKeyId) when calling the PutObject operation: The AWS Access Key Id you provided does not exist in our records.
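The two failures point the same way: the 403 Forbidden on HeadObject and the InvalidAccessKeyId on PutObject both mean S3 is rejecting the credentials Medusa is presenting. Before digging into IAM, it's worth confirming that the credentials file Medusa points at actually parses as a standard AWS credentials file. A quick sanity-check sketch (the path, profile name, and values below are placeholders, not taken from this thread):

```python
# Sanity-check that a credentials file parses as a standard AWS
# credentials (INI) file with the two keys boto3/Medusa need.
import configparser


def check_credentials_file(path, profile="default"):
    """Return (access_key_id, secret_access_key) for `profile`,
    raising if the file or either key is missing."""
    cp = configparser.ConfigParser()
    if not cp.read(path):
        raise FileNotFoundError(path)
    section = cp[profile]
    # Both keys must be present for S3 authentication to work.
    return section["aws_access_key_id"], section["aws_secret_access_key"]


if __name__ == "__main__":
    import tempfile

    # Example file content only; substitute your real key pair.
    sample = (
        "[default]\n"
        "aws_access_key_id = AKIAEXAMPLE\n"
        "aws_secret_access_key = example-secret\n"
    )
    with tempfile.NamedTemporaryFile("w", suffix=".ini", delete=False) as f:
        f.write(sample)
    key_id, _ = check_credentials_file(f.name)
    print("parsed access key id:", key_id)
```

If this parses but S3 still returns 403/InvalidAccessKeyId, the key pair itself is wrong or belongs to a principal without access to the bucket.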

Hi @Sandeep_Singh,

it doesn’t look like you’re running Medusa in K8ssandra or Kubernetes, are you?
Based on the traces you shared, you’re either not passing credentials or not using the right credentials to access your bucket.
Did you set the AWS key/secret pair in a file/secret and correctly point to it in your Medusa settings?
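For a standalone (non-Kubernetes) Medusa install, the key pair is usually kept in an AWS-style credentials file referenced from `medusa.ini` via `key_file`. A sketch of the relevant `[storage]` section, with the bucket/region taken from the logs above and the file path as an example only:

```ini
[storage]
storage_provider = s3
bucket_name = nova-medusa-operator-dev
region = us-west-2
; Path is an example; point this at your own AWS credentials file.
key_file = /etc/medusa/credentials
```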

@alexander When running Medusa on Kubernetes, it shows the same error I posted. So I have a question:
We have the k8ssandra-operator and cluster installed in the cluster, and I want to use an S3 bucket for storing backups. Should I create an IAM role or an IAM user?

k get pods | grep k8ssandra
k8ssandra-cass-nova-operator-798ff8cd98-dvxrq                     1/1     Running            0                 13d
k8ssandra-cass-operator-6dd84f6794-wrt48                          1/1     Running            0                 10d
k8ssandra-cluster-dc2-medusa-standalone-5df44c76d8-dfnsg          1/1     Running            0                 10d
k8ssandra-cluster-dc2-r1-sts-0                                    2/3     Running            0                 5d7h
k8ssandra-cluster-dc2-r2-sts-0                                    3/3     Running            0                 10d
k8ssandra-cluster-dc2-r3-sts-0                                    3/3     Running            0                 10d
k8ssandra-perf-cass-nova-operator-5475fcccc-4kmjx                 1/1     Running            1 (5d7h ago)      15d
k8ssandra-v2-operator-698b4d5746-5fb9l                            1/1     Running            6 (10d ago)       10d

Also, this is my K8ssandraCluster Medusa config:

          bucketName: nova-medusa-operator-dev
          concurrentTransfers: 1
          api_profile: default
          host: ""
          port: 443
          maxBackupAge: 1
          maxBackupCount: 2
          multiPartUploadThreshold: 10000000000000000
          prefix: dc2
          storageProvider: s3
          storageSecretRef:
            name: medusa-bucket-key
          region: us-west-2
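For reference, the `medusa-bucket-key` Secret that `storageSecretRef` points to typically carries the key pair in a `credentials` entry formatted like an AWS credentials file. A sketch (the key values are placeholders; check the K8ssandra docs for your version):

```yaml
apiVersion: v1
kind: Secret
metadata:
  name: medusa-bucket-key
type: Opaque
stringData:
  credentials: |-
    [default]
    aws_access_key_id = AKIAEXAMPLE
    aws_secret_access_key = example-secret
```

The `InvalidAccessKeyId` error in your logs would also occur if this secret exists but holds a stale or mistyped key pair.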

For now you’ll need to create an IAM user and put its credentials (key/secret pair) in the medusa-bucket-key secret.
You’ll find the information on how to set this up here.
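If you create an IAM user, a minimal policy scoped to the backup bucket might look like the following. The bucket name comes from your config above; the exact action list is an assumption (Medusa needs to list, read, write, and delete objects), so adjust it to your environment:

```json
{
  "Version": "2012-10-17",
  "Statement": [
    {
      "Sid": "MedusaBucketList",
      "Effect": "Allow",
      "Action": ["s3:ListBucket"],
      "Resource": "arn:aws:s3:::nova-medusa-operator-dev"
    },
    {
      "Sid": "MedusaObjectAccess",
      "Effect": "Allow",
      "Action": ["s3:GetObject", "s3:PutObject", "s3:DeleteObject"],
      "Resource": "arn:aws:s3:::nova-medusa-operator-dev/*"
    }
  ]
}
```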

The community is working on adding proper support for IAM roles here and here.

It should land in our next release.
