Tutorial

How to Download Files with cURL

Updated on June 11, 2021

By joshtronic and Brian Hogan


Client URL, or cURL, is a library and command-line utility for transferring data between systems. It supports many protocols and tends to be installed by default on many Unix-like operating systems. Because of its general availability, it is a great choice when you need to download a file to your local system, especially in a server environment.

In this tutorial, you’ll use the curl command to download a text file from a web server. You’ll view its contents, save it locally, and tell curl to follow redirects if files have moved.

Downloading files from the Internet can be dangerous, so be sure you are downloading from reputable sources. In this tutorial you’ll download files from DigitalOcean, and you won’t be executing any files you download.

Step 1 — Fetching Remote Files

Out of the box, without any command-line arguments, the curl command fetches a file and displays its contents on standard output.

Let’s give it a try by downloading the robots.txt file from digitalocean.com:

  curl https://www.digitalocean.com/robots.txt

You’ll see the file’s contents displayed on the screen:

Output
User-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz

Give curl a URL and it will fetch the resource and display its contents.
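If you’d rather pipe the contents into another command, note that curl prints a progress meter to standard error whenever its output is not a terminal; the -s (or --silent) flag suppresses it. As a quick sketch, this prints only the first line of the file, the User-agent rule:

  curl -s https://www.digitalocean.com/robots.txt | head -n 1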

Saving Remote Files

Fetching a file and displaying its contents is all well and good, but what if you want to actually save the file to your system?

To save the remote file to your local system, using the same filename it has on the server you’re downloading from, add the -O flag (short for --remote-name):

  curl -O https://www.digitalocean.com/robots.txt

Your file will download:

Output
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   286    0   286    0     0   5296      0 --:--:-- --:--:-- --:--:--  5296

Instead of displaying the contents of the file, curl displays a text-based progress meter and saves the file under the same name as the remote file. You can check on things with the cat command:

  cat robots.txt

The file contains the same contents you saw previously:

Output
User-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
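The -O flag can also be repeated to download several files in one invocation. As a sketch, this fetches both robots.txt and the sitemap.xml it references, saving each under its remote name in the current directory:

  curl -O https://www.digitalocean.com/robots.txt -O https://www.digitalocean.com/sitemap.xml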

Now let’s look at specifying a filename for the downloaded file.

Step 2 — Saving Remote Files with a Specific File Name

You may already have a local file with the same name as the file on the remote server.

To avoid overwriting your local file of the same name, use the -o or --output argument, followed by the name of the local file you’d like to save the contents to.

Execute the following command to download the remote robots.txt file and save it locally as do-bots.txt:

  curl -o do-bots.txt https://www.digitalocean.com/robots.txt

Once again you’ll see the progress meter:

Output
  % Total    % Received % Xferd  Average Speed   Time    Time     Time  Current
                                 Dload  Upload   Total   Spent    Left  Speed
100   286    0   286    0     0   6975      0 --:--:-- --:--:-- --:--:--  7150

Now use the cat command to display the contents of do-bots.txt to verify it’s the file you downloaded:

  cat do-bots.txt

The contents are the same:

Output
User-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz
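The argument to -o can also include a path, so you aren’t limited to the current directory. For example, this sketch saves the file into /tmp instead:

  curl -o /tmp/do-bots.txt https://www.digitalocean.com/robots.txt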

By default, curl doesn’t follow redirects, so when files move, you might not get what you expect. Let’s look at how to fix that.

Step 3 — Following Redirects

Thus far, all of the examples have used fully qualified URLs that include the https:// protocol. If you tried to fetch the robots.txt file and specified only www.digitalocean.com, you would not see any output, because DigitalOcean redirects requests from http:// to https://.

You can verify this by using the -I flag, which displays the response headers rather than the contents of the file:

  curl -I www.digitalocean.com/robots.txt

The output shows that the URL was redirected. The first line of the output tells you that it was moved, and the Location line tells you where:

Output
HTTP/1.1 301 Moved Permanently
Cache-Control: max-age=3600
Cf-Ray: 65dd51678fd93ff7-YYZ
Cf-Request-Id: 0a9e3134b500003ff72b9d0000000001
Connection: keep-alive
Date: Fri, 11 Jun 2021 19:41:37 GMT
Expires: Fri, 11 Jun 2021 20:41:37 GMT
Location: https://www.digitalocean.com/robots.txt
Server: cloudflare
. . .
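If all you need is the status code, curl’s -w (or --write-out) option can print variables such as %{http_code} once the transfer completes. Combined with -s and -o /dev/null to discard the body, a sketch like this would print 301 for the URL above:

  curl -s -o /dev/null -w "%{http_code}\n" www.digitalocean.com/robots.txt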

You could use curl to make another request manually, or you can use the -L (--location) argument, which tells curl to redo the request at the new location whenever it encounters a redirect. Give it a try:

  curl -L www.digitalocean.com/robots.txt

This time you see the output, as curl followed the redirect:

Output
User-agent: *
Disallow:
sitemap: https://www.digitalocean.com/sitemap.xml
sitemap: https://www.digitalocean.com/community/main_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/questions_sitemap.xml.gz
sitemap: https://www.digitalocean.com/community/users_sitemap.xml.gz

You can combine the -L argument with some of the aforementioned arguments to download the file to your local system:

  curl -L -o do-bots.txt www.digitalocean.com/robots.txt
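In scripts, it’s often worth adding -f (or --fail) as well, so curl exits with a nonzero status on HTTP errors such as 404 rather than saving the error page. A minimal sketch:

  curl -fL -o do-bots.txt www.digitalocean.com/robots.txt || echo "Download failed"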

Warning: Many resources online will ask you to use curl to download scripts and execute them. It’s good practice to check the contents of any script you download before making it executable and running it. Use the less command to review the code and ensure it’s something you want to run.
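For example, rather than piping a script straight into your shell, you could download it, read it with less, and only then make it executable and run it. The URL below is a placeholder, not a real installer:

  curl -fL -o install.sh https://example.com/install.sh
  less install.sh
  chmod +x install.sh && ./install.sh   # only after reviewing the script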

Conclusion

curl lets you quickly download files from a remote system. It supports many protocols and can also make more complex web requests, including interacting with remote APIs to send and receive data.
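For instance, here is a sketch of sending JSON to a hypothetical API endpoint (https://api.example.com/items is a placeholder):

  curl -X POST -H "Content-Type: application/json" -d '{"name": "example"}' https://api.example.com/items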

You can learn more by viewing the manual page for curl by running man curl.
