Recently, I needed an offline copy of some documentation that was available only as web pages. That's how I ended up cloning entire sections of websites with wget.
These are the basic arguments needed to perform the recursive download; a complete example command follows the list.
--recursive
Tells wget to recursively download pages, starting from the specified URL.
--level=1
Tells wget to stop after one level of recursion. This can be increased to download more deeply, or set to 0, which means "no limit".
--no-clobber
Skips downloads that would overwrite existing files.
--page-requisites
Tells wget to download all the resources (images, CSS, JavaScript, ...) that are needed for the page to work.
--html-extension
Adds the ".html" extension to downloaded files. This serves two purposes: the browser recognizes them as HTML files, and it resolves naming conflicts for "generated" URLs, where there is no directory with an "index.html" but only a framework that responds dynamically with generated pages.
--convert-links
After the download is complete, converts the links in the documents to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.
--no-parent
Tells wget never to ascend to the parent directory when retrieving recursively, so only pages below the starting URL are downloaded.
--domains=www.example.com
Limits the hosts that will be followed; the value is a comma-separated list of domains.
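
Putting these together, a command along the following lines performs the recursive download (the URL is only a placeholder; substitute the actual starting page):

wget --recursive \
     --level=1 \
     --no-clobber \
     --page-requisites \
     --html-extension \
     --convert-links \
     --no-parent \
     --domains=www.example.com \
     https://www.example.com/docs/index.html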
Many web servers limit how many pages a client may download in a given amount of time, or restrict which user agents can access certain pages. To work around such limits, a few extra options may be added.
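
For instance, a sketch of such a command, adding a pause between requests and presenting a browser-like user agent (the specific values here are assumptions, not a recommendation):

wget --recursive --level=1 --no-clobber --page-requisites \
     --html-extension --convert-links --no-parent \
     --domains=www.example.com \
     --wait=1 \
     --random-wait \
     --user-agent="Mozilla/5.0" \
     https://www.example.com/docs/index.html

Here --wait pauses between retrievals, --random-wait varies that pause so requests look less mechanical, and --user-agent changes the client string that wget reports to the server.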