I’ve got a Swift application that uses the Soto client library for S3 access. I’ve got it working fine, but I see there are a few different ways to get an object to the same place:
An example of my code is this:
```swift
func put(data inData: Data, destPath inPath: String, contentType inContentType: String) async throws {
    let key = (inPath as NSString).lastPathComponent
    let path = (inPath as NSString).deletingLastPathComponent
    let putObjectRequest = S3.PutObjectRequest(
        acl: .publicRead,
        body: .data(inData),
        bucket: "\(path)",
        contentType: inContentType,
        key: key
    )
    let putResult = try await self.s3?.putObject(putObjectRequest)
    Self.logger.info("Uploaded file to: \(inPath)")
}
```
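For what it's worth, here's what those two `NSString` calls produce on a sample path (the path below is hypothetical, just to illustrate the split): the `bucket` argument ends up holding the entire directory portion of the path.

```swift
import Foundation

// Hypothetical inPath, in the shape my put(...) receives it.
let inPath = "my-bucket/images/2024/photo.png"

// The same split the method performs:
let key = (inPath as NSString).lastPathComponent          // "photo.png"
let path = (inPath as NSString).deletingLastPathComponent // "my-bucket/images/2024"

print(key)   // photo.png
print(path)  // my-bucket/images/2024
```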
I must’ve based that off some sample code, because it rubs me the wrong way. I prefer one where `bucket` is specified as the actual bucket name, and `key` is the full path and filename. I’ve tried it that way, and it works.
It also works if the endpoint I specify in the S3 client configuration takes the form `https://my-bucket-name.region.digitaloceanspaces.com`, and I specify an empty string for the bucket.
What’s the “right” way to do this, and does Amazon S3 work the same way?
UPDATE: It seems this is a supported behavior of Amazon S3 as well (Virtual Hosting of Buckets). For other reasons, it’s helpful in my app to make the endpoint URL as unique as possible, so I’ll be doing that, using an empty string for the `bucket` argument and the full path and filename for the `key` argument.
Hey!
The flexibility in how you specify the bucket and key can be useful, but also a bit confusing. Amazon S3 and DigitalOcean Spaces operate on the same model: the bucket is the top-level container, and the key is the object’s full path within it.
Your method of dynamically deriving the bucket and key from `inPath` is clever and flexible. However, as you’ve observed, it doesn’t align with conventional use, where the bucket is a static, known container and the key is the variable path to the specific object.

The “right” way, especially for clarity and maintainability of your code, is to:
Specify the bucket as the actual, unchanging name of your DigitalOcean Spaces bucket. This makes it clear which bucket you’re interacting with and is more intuitive for others reading your code or for you when you come back to it after some time.
Use the full path and filename as the key. This approach is straightforward and aligns with how most developers understand and interact with S3-compatible storage. It accurately represents the object’s location within the bucket and avoids any ambiguity.
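Put together, a sketch of that shape looks like this (the bucket name and key below are hypothetical, and Soto initializer labels can vary between versions, so treat it as an outline rather than a drop-in replacement):

```swift
import SotoS3

func put(data: Data, key: String, contentType: String, s3: S3) async throws {
    // Bucket is the static, known container; key is the full path within it.
    let request = S3.PutObjectRequest(
        acl: .publicRead,
        body: .data(data),
        bucket: "my-bucket-name",   // hypothetical: your actual bucket name
        contentType: contentType,
        key: key                    // e.g. "images/2024/photo.png"
    )
    _ = try await s3.putObject(request)
}
```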
Regarding the endpoint configuration and using an empty string for the bucket: this taps into the concept of Virtual Hosting. By specifying the bucket in the endpoint (e.g., `https://my-bucket-name.region.digitaloceanspaces.com`), you’re directing the request to that specific bucket, which lets you leave the bucket parameter empty when making API calls. This method works well and is supported by both Amazon S3 and DigitalOcean Spaces, and it can simplify configuration or improve readability when each bucket gets its own endpoint.

Given your update and your preference for making the endpoint URL as unique as possible, using the virtual hosting approach and specifying the full path and filename for the key (with an empty string for the bucket in your API calls) is perfectly valid and aligns with supported behaviors of S3 services. It can also simplify your application’s logic when dealing with multiple buckets, or when you prefer to encode the bucket information in the endpoint URL itself.
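As a sketch of that virtual-hosted setup (the endpoint, region, credentials, and key below are placeholders, and the Soto client initializer has changed across major versions, so take this as an outline rather than copy-paste):

```swift
import SotoS3

// Bucket-specific endpoint: the bucket name lives in the hostname.
let client = AWSClient(
    credentialProvider: .static(
        accessKeyId: "SPACES_ACCESS_KEY",     // placeholder
        secretAccessKey: "SPACES_SECRET_KEY"  // placeholder
    )
)
let s3 = S3(
    client: client,
    endpoint: "https://my-bucket-name.nyc3.digitaloceanspaces.com"
)

// With the bucket encoded in the endpoint, the bucket argument stays empty:
let request = S3.PutObjectRequest(
    body: .data(data),
    bucket: "",
    contentType: "image/png",
    key: "images/2024/photo.png"  // full path and filename as the key
)
```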
Best,
Bobby