I have a project using DigitalOcean Spaces to upload files to and request files from a server. I've got a Node.js server that uses the S3 SDK to generate a presigned URL, which is then used in a JavaScript fetch
to upload directly to Spaces. This is to reduce load on our server and avoid the running costs of proxying uploads through it.
This is the code making the upload request (written in TypeScript):
async (uploadUrl: string, file: File) => {
  const response = await fetch(uploadUrl, {
    method: 'PUT',
    body: file,
    headers: {
      'Content-Type': file.type,
    },
  })
  return response
}
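For reference, it gets called roughly like this (a sketch: the /api/presign endpoint and the { url } response shape are illustrative stand-ins for our actual wiring, and uploadFile is the anonymous function above given a name):

const upload = async (file: File) => {
  // Ask our server for a presigned PUT URL for this file's key
  const res = await fetch(`/api/presign?key=${encodeURIComponent(file.name)}`)
  const { url } = await res.json()
  return uploadFile(url, file) // then PUT the file straight to Spaces
}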
This responds with an ERR_CERT_COMMON_NAME_INVALID error. I'm using Browsersync for a local dev server and imagine this is a case of setting up CORS properly to allow localhost - I'm aware of the security concern with doing that, and I will restrict CORS as soon as development is complete on this feature. I've added a CORS rule to allow localhost:3000 but I'm still getting this issue, so I'd appreciate any pointers. Ideally there would be a way of setting up something like a dev mode on an obscure bucket that relaxes these rules temporarily.
As I understand it, I shouldn't need to edit the ACL: while the bucket is private, the presigned URL is being generated by a server with an API key and secret that have full access. Or am I wrong?
Here's the relevant part of my server-side code in case it helps (it's Node.js):
import { S3Client, PutObjectCommand, GetObjectCommand } from '@aws-sdk/client-s3'
import { getSignedUrl } from '@aws-sdk/s3-request-presigner'

const client = new S3Client({
  endpoint: process.env.SPACES_ENDPOINT,
  region: process.env.SPACES_REGION,
  forcePathStyle: false, // bucket name becomes a subdomain of the endpoint
  credentials: {
    accessKeyId: process.env.SPACES_ACCESS_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
  },
})

// key and metadata are passed in by whatever calls this
const getPresignedPutURL = (key, metadata = {}) => {
  const command = new PutObjectCommand({
    Bucket: process.env.SPACES_BUCKET,
    Key: key,
    Metadata: { ...metadata },
  })
  return getSignedUrl(client, command, { expiresIn: 60 * 60 * 12 }) // 12 hours
}
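It's exposed to the client through something like this (a sketch: the Express wiring and the /api/presign route are illustrative, not my exact code):

import express from 'express'

const app = express()

// Hand the browser a presigned PUT URL for the key it asks for
app.get('/api/presign', async (req, res) => {
  const url = await getPresignedPutURL(String(req.query.key))
  res.json({ url })
})

app.listen(3001) // hypothetical port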
Oh wow - after days of messing around on and off with this, it turns out I made a typo when naming my bucket, and every other error was a red herring… I eventually tried hitting just the domain directly (SUBDOMAIN.REGION.digitaloceanspaces.com) and got a “bucket doesn’t exist” error that keyed me onto it.
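For anyone else debugging this, the check was just something like the following (bucket and region names here are made up):

// Node 18+ has global fetch; 'my-bucket' and 'nyc3' are placeholders
const res = await fetch('https://my-bucket.nyc3.digitaloceanspaces.com/')
console.log(res.status, await res.text())
// A nonexistent bucket comes back as a NoSuchBucket XML error; an existing
// private bucket gives AccessDenied instead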
Damn. Thanks for your help anyway haha.
The error was a red herring - the issue was the URL being generated. For SPACES_ENDPOINT I was including my bucket name in the subdomain (which is what the DigitalOcean settings page spits out as the ‘Origin Endpoint’, hence the confusion). Presumably the Spaces SSL certificate doesn’t support the double subdomain that ended up in the URL the client produced.
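To spell it out (bucket and region names are placeholders):

// With forcePathStyle: false, the SDK prepends the bucket name itself, so
// SPACES_ENDPOINT must be the bare regional endpoint
const originEndpoint = 'https://my-bucket.nyc3.digitaloceanspaces.com' // what the settings page shows
const spacesEndpoint = 'https://nyc3.digitaloceanspaces.com' // what the SDK config wants
// Feeding the origin endpoint in produces
// my-bucket.my-bucket.nyc3.digitaloceanspaces.com, which the single-level
// *.nyc3.digitaloceanspaces.com wildcard certificate doesn't cover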
I’m now getting CORS errors instead but they’re at least coming from the correct URL…
Heya,
The ERR_CERT_COMMON_NAME_INVALID error typically indicates an SSL certificate issue: the certificate presented by the server doesn’t cover the domain or subdomain being requested.
Regarding CORS, you’ve already mentioned that you added a rule to allow localhost:3000. Ensure that your CORS rule is correctly configured in your DigitalOcean Spaces settings - the configuration should include the necessary origins, headers, and methods for your use case: https://docs.digitalocean.com/products/spaces/how-to/configure-cors/
The rule can also look like this:
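For example, applied programmatically with the same SDK your server already uses (a sketch; adjust the origins, methods, and headers to your use case):

import { S3Client, PutBucketCorsCommand } from '@aws-sdk/client-s3'

const client = new S3Client({
  endpoint: process.env.SPACES_ENDPOINT, // regional endpoint, e.g. https://nyc3.digitaloceanspaces.com
  region: process.env.SPACES_REGION,
  credentials: {
    accessKeyId: process.env.SPACES_ACCESS_KEY,
    secretAccessKey: process.env.SPACES_SECRET,
  },
})

// Allow the local dev origin to GET and PUT against the bucket
await client.send(
  new PutBucketCorsCommand({
    Bucket: process.env.SPACES_BUCKET,
    CORSConfiguration: {
      CORSRules: [
        {
          AllowedOrigins: ['http://localhost:3000'],
          AllowedMethods: ['GET', 'PUT'],
          AllowedHeaders: ['*'],
          MaxAgeSeconds: 3600,
        },
      ],
    },
  })
)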
A similar question was asked here:
https://www.digitalocean.com/community/questions/why-can-i-use-http-localhost-port-with-cors-in-spaces
As for the ACL, you are correct in assuming you shouldn’t need to modify it. The presigned URL is generated with your server’s API key and secret, which have the required permissions, and the ACL of the bucket itself doesn’t affect the ability to use a presigned URL generated with appropriate credentials.
Regards