wget man page


NAME

wget - the non-interactive network downloader

SYNOPSIS

wget [OPTION]... [URL]...

DESCRIPTION

GNU Wget is a free utility for non-interactive download of files from the Web. It supports the HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.

(The rest of this section is omitted here; the full man page text follows below.)

OPTIONS

Startup:

-V, --version Display the version of Wget and exit.

-h, --help Print this help.

-b, --background Go to background after startup.

-e, --execute=COMMAND Execute a `.wgetrc'-style command.

Logging and input files:

-o, --output-file=FILE Write log messages to FILE.

-a, --append-output=FILE Append log messages to the end of FILE.

-d, --debug Print debug output.

-q, --quiet Quiet mode (no output).

-v, --verbose Verbose output mode (the default).

-nv, --non-verbose Turn off verbose mode, without being completely quiet.

-i, --input-file=FILE Download the URLs found in FILE.

-F, --force-html Treat the input file as HTML.

-B, --base=URL When the -F -i FILE options are used, prepend URL to relative links.

Download:

-t, --tries=NUMBER Set the number of retries (0 means unlimited).

--retry-connrefused Retry even if the connection is refused.

-O, --output-document=FILE Write the data to FILE.

-nc, --no-clobber Don't modify files that already exist, and don't write new copies with a .# suffix (# is a number).

-c, --continue Resume getting a partially-downloaded file.

--progress=TYPE Select the way download progress is displayed.

-N, --timestamping Don't re-retrieve files unless the remote copy is newer.

-S, --server-response Print server responses.

--spider Don't download anything.

-T, --timeout=SECONDS Set the read timeout (in seconds).

-w, --wait=SECONDS Wait the given number of seconds between retrievals.

--waitretry=SECONDS Wait between retries (from 1 second up to the given number of seconds).

--random-wait Wait a random time between retrievals (from 0 to 2*WAIT seconds).

-Y, --proxy=on/off Turn the proxy on or off.

-Q, --quota=SIZE Set the retrieval quota to SIZE.

--bind-address=ADDRESS Connect using the specified local address (hostname or IP).

--limit-rate=RATE Limit the download rate.

--dns-cache=off Disable caching of DNS lookups.

--restrict-file-names=OS Restrict the characters in file names to those the given OS allows.

Directories:

-nd, --no-directories Don't create directories.

-x, --force-directories Force creation of directories.

-nH, --no-host-directories Don't create directories named after the remote host.

-P, --directory-prefix=PREFIX Create the named directory before saving files.

--cut-dirs=NUMBER Ignore the given number of remote directory components.

HTTP options:

--http-user=USER Set the HTTP user name.

--http-passwd=PASS Set the HTTP password.

-C, --cache=on/off (Dis)allow server-cached data (allowed by default).

-E, --html-extension Save all documents of MIME type text/html with an .html extension.

--ignore-length Ignore the "Content-Length" header field.

--header=STRING Insert STRING among the request headers.

--proxy-user=USER Set the proxy user name.

--proxy-passwd=PASS Set the proxy password.

--referer=URL Include a "Referer: URL" header in the HTTP request.

-s, --save-headers Save the HTTP headers to the file.

-U, --user-agent=AGENT Identify as AGENT instead of Wget/VERSION.

--no-http-keep-alive Disable HTTP keep-alive (persistent connections).

--cookies=off Disable cookies.

--load-cookies=FILE Load cookies from FILE before the session starts.

--save-cookies=FILE Save cookies to FILE after the session ends.

--post-data=STRING Use the POST method; send STRING as the data.

--post-file=FILE Use the POST method; send the contents of FILE.

HTTPS (SSL) options:

--sslcertfile=FILE Optional client certificate.

--sslcertkey=KEYFILE Optional keyfile for this certificate.

--egd-file=FILE File name of the EGD socket.

--sslcadir=DIR Directory where the CA hash list is stored.

--sslcafile=FILE File containing a bundle of CAs.

--sslcerttype=0/1 Client certificate type: 0=PEM (default) / 1=ASN1 (DER).

--sslcheckcert=0/1 Check the server certificate against the given CAs.

--sslprotocol=0-3 Choose the SSL protocol: 0=automatic, 1=SSLv2, 2=SSLv3, 3=TLSv1.

FTP options:

-nr, --dont-remove-listing Don't remove ".listing" files.

-g, --glob=on/off Turn expansion of wildcarded file names on or off.

--passive-ftp Use the "passive" transfer mode.

--retr-symlinks In recursive mode, download the files that symbolic links point to (links to directories excepted).

Recursive download:

-r, --recursive Recursive download.

-l, --level=NUMBER Maximum recursion depth (inf or 0 for unlimited).

--delete-after Delete files locally after downloading them.

-k, --convert-links Convert absolute links to relative links.

-K, --backup-converted Before converting file X, back it up as X.orig.

-m, --mirror Shortcut equivalent to the options -r -N -l inf -nr.

-p, --page-requisites Download all files needed to display a complete web page, such as images.

--strict-comments Turn on strict (SGML) handling of HTML comments.

Accept/reject options for recursive download:

-A, --accept=LIST Comma-separated list of accepted file patterns.

-R, --reject=LIST Comma-separated list of rejected file patterns.

-D, --domains=LIST Comma-separated list of accepted domains.

--exclude-domains=LIST Comma-separated list of rejected domains.

--follow-ftp Follow FTP links found in HTML files.

--follow-tags=LIST Comma-separated list of HTML tags to follow.

-G, --ignore-tags=LIST Comma-separated list of HTML tags to ignore.

-H, --span-hosts Go to other hosts when recursing.

-L, --relative Follow relative links only.

-I, --include-directories=LIST List of directories to download.

-X, --exclude-directories=LIST List of directories to exclude.

-np, --no-parent Don't ascend to the parent directory.


NAME

Wget - The non-interactive network downloader.  

SYNOPSIS

wget [option]... [URL]...  

DESCRIPTION

GNU Wget is a free utility for non-interactive download of files from the Web. It supports HTTP, HTTPS, and FTP protocols, as well as retrieval through HTTP proxies.

Wget is non-interactive, meaning that it can work in the background, while the user is not logged on. This allows you to start a retrieval and disconnect from the system, letting Wget finish the work. By contrast, most of the Web browsers require constant user's presence, which can be a great hindrance when transferring a lot of data.

Wget can follow links in HTML and XHTML pages and create local versions of remote web sites, fully recreating the directory structure of the original site. This is sometimes referred to as ``recursive downloading.'' While doing that, Wget respects the Robot Exclusion Standard (/robots.txt). Wget can be instructed to convert the links in downloaded HTML files to the local files for offline viewing.

Wget has been designed for robustness over slow or unstable network connections; if a download fails due to a network problem, it will keep retrying until the whole file has been retrieved. If the server supports regetting, it will instruct the server to continue the download from where it left off.  

OPTIONS

 

Basic Startup Options

-V
--version
Display the version of Wget.
-h
--help
Print a help message describing all of Wget's command-line options.
-b
--background
Go to background immediately after startup. If no output file is specified via the -o option, output is redirected to wget-log.
-e command
--execute command
Execute command as if it were a part of .wgetrc. A command thus invoked will be executed after the commands in .wgetrc, thus taking precedence over them. If you need to specify more than one wgetrc command, use multiple instances of -e.
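For example, several -e commands can be combined on a single run; the URL below is only a placeholder:

        wget -e robots=off -e 'wait = 1' http://www.example.com/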
 

Logging and Input File Options

-o logfile
--output-file=logfile
Log all messages to logfile. The messages are normally reported to standard error.
-a logfile
--append-output=logfile
Append to logfile. This is the same as -o, only it appends to logfile instead of overwriting the old log file. If logfile does not exist, a new file is created.
-d
--debug
Turn on debug output, meaning various information important to the developers of Wget if it does not work properly. Your system administrator may have chosen to compile Wget without debug support, in which case -d will not work. Please note that compiling with debug support is always safe---Wget compiled with the debug support will not print any debug info unless requested with -d.
-q
--quiet
Turn off Wget's output.
-v
--verbose
Turn on verbose output, with all the available data. The default output is verbose.
-nv
--non-verbose
Non-verbose output---turn off verbose without being completely quiet (use -q for that), which means that error messages and basic information still get printed.
-i file
--input-file=file
Read URLs from file, in which case no URLs need to be on the command line. If there are URLs both on the command line and in an input file, those on the command lines will be the first ones to be retrieved. The file need not be an HTML document (but no harm if it is)---it is enough if the URLs are just listed sequentially.

However, if you specify --force-html, the document will be regarded as html. In that case you may have problems with relative links, which you can solve either by adding "<base href="url">" to the documents or by specifying --base=url on the command line.

-F
--force-html
When input is read from a file, force it to be treated as an HTML file. This enables you to retrieve relative links from existing HTML files on your local disk, by adding "<base href="url">" to HTML, or using the --base command-line option.
-B URL
--base=URL
When used in conjunction with -F, prepends URL to relative links in the file specified by -i.
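As a rough illustration (urls.txt, links.html, and the base URL are hypothetical), these options combine like this:

        # urls.txt simply lists one URL per line
        wget -i urls.txt

        # links.html is a local HTML file whose relative links
        # should be resolved against the given base
        wget -F -B http://www.example.com/ -i links.html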
 

Download Options

--bind-address=ADDRESS
When making client TCP/IP connections, "bind()" to ADDRESS on the local machine. ADDRESS may be specified as a hostname or IP address. This option can be useful if your machine is bound to multiple IPs.
-t number
--tries=number
Set number of retries to number. Specify 0 or inf for infinite retrying. The default is to retry 20 times, with the exception of fatal errors like ``connection refused'' or ``not found'' (404), which are not retried.
-O file
--output-document=file
The documents will not be written to the appropriate files, but all will be concatenated together and written to file. If file already exists, it will be overwritten. If the file is -, the documents will be written to standard output.
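For instance (the URLs are placeholders), to choose the output name explicitly or to pipe the document into another program:

        wget -O front-page.html http://www.example.com/
        wget -O - http://www.example.com/list.txt | grep -i readme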
-nc
--no-clobber
If a file is downloaded more than once in the same directory, Wget's behavior depends on a few options, including -nc. In certain cases, the local file will be clobbered, or overwritten, upon repeated download. In other cases it will be preserved.

When running Wget without -N, -nc, or -r, downloading the same file in the same directory will result in the original copy of file being preserved and the second copy being named file.1. If that file is downloaded yet again, the third copy will be named file.2, and so on. When -nc is specified, this behavior is suppressed, and Wget will refuse to download newer copies of file. Therefore, ``"no-clobber"'' is actually a misnomer in this mode---it's not clobbering that's prevented (as the numeric suffixes were already preventing clobbering), but rather the multiple version saving that's prevented.

When running Wget with -r, but without -N or -nc, re-downloading a file will result in the new copy simply overwriting the old. Adding -nc will prevent this behavior, instead causing the original version to be preserved and any newer copies on the server to be ignored.

When running Wget with -N, with or without -r, the decision as to whether or not to download a newer copy of a file depends on the local and remote timestamp and size of the file. -nc may not be specified at the same time as -N.

Note that when -nc is specified, files with the suffixes .html or .htm will be loaded from the local disk and parsed as if they had been retrieved from the Web.

-c
--continue
Continue getting a partially-downloaded file. This is useful when you want to finish up a download started by a previous instance of Wget, or by another program. For instance:

 

        wget -c ftp://sunsite.doc.ic.ac.uk/ls-lR.Z

If there is a file named ls-lR.Z in the current directory, Wget will assume that it is the first portion of the remote file, and will ask the server to continue the retrieval from an offset equal to the length of the local file.

Note that you don't need to specify this option if you just want the current invocation of Wget to retry downloading a file should the connection be lost midway through. This is the default behavior. -c only affects resumption of downloads started prior to this invocation of Wget, and whose local files are still sitting around.

Without -c, the previous example would just download the remote file to ls-lR.Z.1, leaving the truncated ls-lR.Z file alone.

Beginning with Wget 1.7, if you use -c on a non-empty file, and it turns out that the server does not support continued downloading, Wget will refuse to start the download from scratch, which would effectively ruin existing contents. If you really want the download to start from scratch, remove the file.

Also beginning with Wget 1.7, if you use -c on a file which is of equal size as the one on the server, Wget will refuse to download the file and print an explanatory message. The same happens when the file is smaller on the server than locally (presumably because it was changed on the server since your last download attempt)---because ``continuing'' is not meaningful, no download occurs.

On the other side of the coin, while using -c, any file that's bigger on the server than locally will be considered an incomplete download and only "(length(remote) - length(local))" bytes will be downloaded and tacked onto the end of the local file. This behavior can be desirable in certain cases---for instance, you can use wget -c to download just the new portion that's been appended to a data collection or log file.

However, if the file is bigger on the server because it's been changed, as opposed to just appended to, you'll end up with a garbled file. Wget has no way of verifying that the local file is really a valid prefix of the remote file. You need to be especially careful of this when using -c in conjunction with -r, since every file will be considered as an ``incomplete download'' candidate.

Another instance where you'll get a garbled file if you try to use -c is if you have a lame HTTP proxy that inserts a ``transfer interrupted'' string into the local file. In the future a ``rollback'' option may be added to deal with this case.

Note that -c only works with FTP servers and with HTTP servers that support the "Range" header.

--progress=type
Select the type of the progress indicator you wish to use. Legal indicators are ``dot'' and ``bar''.

The ``bar'' indicator is used by default. It draws an ASCII progress bar graphics (a.k.a ``thermometer'' display) indicating the status of retrieval. If the output is not a TTY, the ``dot'' bar will be used by default.

Use --progress=dot to switch to the ``dot'' display. It traces the retrieval by printing dots on the screen, each dot representing a fixed amount of downloaded data.

When using the dotted retrieval, you may also set the style by specifying the type as dot:style. Different styles assign different meaning to one dot. With the "default" style each dot represents 1K, there are ten dots in a cluster and 50 dots in a line. The "binary" style has a more ``computer''-like orientation---8K dots, 16-dots clusters and 48 dots per line (which makes for 384K lines). The "mega" style is suitable for downloading very large files---each dot represents 64K retrieved, there are eight dots in a cluster, and 48 dots on each line (so each line contains 3M).

Note that you can set the default style using the "progress" command in .wgetrc. That setting may be overridden from the command line. The exception is that, when the output is not a TTY, the ``dot'' progress will be favored over ``bar''. To force the bar output, use --progress=bar:force.
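For example (any URL will do), to use the sparse ``mega'' dot style for a large file, or to keep the bar display even when output goes to a log file:

        wget --progress=dot:mega http://www.example.com/big.iso
        wget --progress=bar:force -o wget.log http://www.example.com/big.iso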

-N
--timestamping
Turn on time-stamping.
-S
--server-response
Print the headers sent by HTTP servers and responses sent by FTP servers.
--spider
When invoked with this option, Wget will behave as a Web spider, which means that it will not download the pages, just check that they are there. For example, you can use Wget to check your bookmarks:

 

        wget --spider --force-html -i bookmarks.html

This feature needs much more work for Wget to get close to the functionality of real web spiders.

-T seconds
--timeout=seconds
Set the network timeout to seconds seconds. This is equivalent to specifying --dns-timeout, --connect-timeout, and --read-timeout, all at the same time.

Whenever Wget connects to or reads from a remote host, it checks for a timeout and aborts the operation if the time expires. This prevents anomalous occurrences such as hanging reads or infinite connects. The only timeout enabled by default is a 900-second timeout for reading. Setting timeout to 0 disables checking for timeouts.

Unless you know what you are doing, it is best not to set any of the timeout-related options.

--dns-timeout=seconds
Set the DNS lookup timeout to seconds seconds. DNS lookups that don't complete within the specified time will fail. By default, there is no timeout on DNS lookups, other than that implemented by system libraries.
--connect-timeout=seconds
Set the connect timeout to seconds seconds. TCP connections that take longer to establish will be aborted. By default, there is no connect timeout, other than that implemented by system libraries.
--read-timeout=seconds
Set the read (and write) timeout to seconds seconds. Reads that take longer will fail. The default value for read timeout is 900 seconds.
--limit-rate=amount
Limit the download speed to amount bytes per second. Amount may be expressed in bytes, kilobytes with the k suffix, or megabytes with the m suffix. For example, --limit-rate=20k will limit the retrieval rate to 20KB/s. This kind of thing is useful when, for whatever reason, you don't want Wget to consume the entire available bandwidth.

Note that Wget implements the limiting by sleeping the appropriate amount of time after a network read that took less time than specified by the rate. Eventually this strategy causes the TCP transfer to slow down to approximately the specified rate. However, it may take some time for this balance to be achieved, so don't be surprised if limiting the rate doesn't work well with very small files.
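A throttled retrieval might look like this (the URL is hypothetical):

        wget --limit-rate=20k http://www.example.com/archive.tar.gz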

-w seconds
--wait=seconds
Wait the specified number of seconds between the retrievals. Use of this option is recommended, as it lightens the server load by making the requests less frequent. Instead of in seconds, the time can be specified in minutes using the "m" suffix, in hours using "h" suffix, or in days using "d" suffix.

Specifying a large value for this option is useful if the network or the destination host is down, so that Wget can wait long enough to reasonably expect the network error to be fixed before the retry.
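For example (urls.txt is hypothetical), to pause between the files listed in an input file:

        wget -w 5 -i urls.txt
        wget --wait=2m -i urls.txt     # same idea, waiting two minutes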

--waitretry=seconds
If you don't want Wget to wait between every retrieval, but only between retries of failed downloads, you can use this option. Wget will use linear backoff, waiting 1 second after the first failure on a given file, then waiting 2 seconds after the second failure on that file, up to the maximum number of seconds you specify. Therefore, a value of 10 will actually make Wget wait up to (1 + 2 + ... + 10) = 55 seconds per file.

Note that this option is turned on by default in the global wgetrc file.

--random-wait
Some web sites may perform log analysis to identify retrieval programs such as Wget by looking for statistically significant similarities in the time between requests. This option causes the time between requests to vary between 0 and 2 * wait seconds, where wait was specified using the --wait option, in order to mask Wget's presence from such analysis.

A recent article in a publication devoted to development on a popular consumer platform provided code to perform this analysis on the fly. Its author suggested blocking at the class C address level to ensure automated retrieval programs were blocked despite changing DHCP-supplied addresses.

The --random-wait option was inspired by this ill-advised recommendation to block many unrelated users from a web site due to the actions of one.

-Y on/off
--proxy=on/off
Turn proxy support on or off. The proxy is on by default if the appropriate environment variable is defined.

For more information about the use of proxies with Wget, see the GNU Info entry for wget.

-Q quota
--quota=quota
Specify download quota for automatic retrievals. The value can be specified in bytes (default), kilobytes (with k suffix), or megabytes (with m suffix).

Note that quota will never affect downloading a single file. So if you specify wget -Q10k ftp://wuarchive.wustl.edu/ls-lR.gz, all of the ls-lR.gz will be downloaded. The same goes even when several URLs are specified on the command-line. However, quota is respected when retrieving either recursively, or from an input file. Thus you may safely type wget -Q2m -i sites---download will be aborted when the quota is exceeded.

Setting quota to 0 or to inf unlimits the download quota.
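For instance (sites.txt is a placeholder), to stop an input-file retrieval after roughly 50 megabytes:

        wget -Q50m -i sites.txt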

--dns-cache=off
Turn off caching of DNS lookups. Normally, Wget remembers the addresses it looked up from DNS so it doesn't have to repeatedly contact the DNS server for the same (typically small) set of addresses it retrieves from. This cache exists in memory only; a new Wget run will contact DNS again.

However, in some cases it is not desirable to cache host names, even for the duration of a short-running application like Wget. For example, some HTTP servers are hosted on machines with dynamically allocated IP addresses that change from time to time. Their DNS entries are updated along with each change. When Wget's download from such a host gets interrupted by IP address change, Wget retries the download, but (due to DNS caching) it contacts the old address. With the DNS cache turned off, Wget will repeat the DNS lookup for every connect and will thus get the correct dynamic address every time---at the cost of additional DNS lookups where they're probably not needed.

If you don't understand the above description, you probably won't need this option.

--restrict-file-names=mode
Change which characters found in remote URLs may show up in local file names generated from those URLs. Characters that are restricted by this option are escaped, i.e. replaced with %HH, where HH is the hexadecimal number that corresponds to the restricted character.

By default, Wget escapes the characters that are not valid as part of file names on your operating system, as well as control characters that are typically unprintable. This option is useful for changing these defaults, either because you are downloading to a non-native partition, or because you want to disable escaping of the control characters.

When mode is set to ``unix'', Wget escapes the character / and the control characters in the ranges 0--31 and 128--159. This is the default on Unix-like OS'es.

When mode is set to ``windows'', Wget escapes the characters \, |, /, :, ?, ", *, <, >, and the control characters in the ranges 0--31 and 128--159. In addition to this, Wget in Windows mode uses + instead of : to separate host and port in local file names, and uses @ instead of ? to separate the query portion of the file name from the rest. Therefore, a URL that would be saved as www.xemacs.org:4300/search.pl?input=blah in Unix mode would be saved as www.xemacs.org+4300/search.pl@input=blah in Windows mode. This mode is the default on Windows.

If you append ,nocontrol to the mode, as in unix,nocontrol, escaping of the control characters is also switched off. You can use --restrict-file-names=nocontrol to turn off escaping of control characters without affecting the choice of the OS to use as file name restriction mode.
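For example (the URL is a placeholder), when saving onto a Windows-style filesystem from a Unix host, or when control-character escaping is unwanted:

        wget -r --restrict-file-names=windows http://www.example.com/
        wget -r --restrict-file-names=unix,nocontrol http://www.example.com/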

 

Directory Options

-nd
--no-directories
Do not create a hierarchy of directories when retrieving recursively. With this option turned on, all files will get saved to the current directory, without clobbering (if a name shows up more than once, the filenames will get extensions .n).
-x
--force-directories
The opposite of -nd---create a hierarchy of directories, even if one would not have been created otherwise. E.g. wget -x http://fly.srk.fer.hr/robots.txt will save the downloaded file to fly.srk.fer.hr/robots.txt.
-nH
--no-host-directories
Disable generation of host-prefixed directories. By default, invoking Wget with -r http://fly.srk.fer.hr/ will create a structure of directories beginning with fly.srk.fer.hr/. This option disables such behavior.
--protocol-directories
Use the protocol name as a directory component of local file names. For example, with this option, wget -r http://host will save to http/host/... rather than just to host/....


--cut-dirs=number
Ignore number directory components. This is useful for getting a fine-grained control over the directory where recursive retrieval will be saved.

Take, for example, the directory at ftp://ftp.xemacs.org/pub/xemacs/. If you retrieve it with -r, it will be saved locally under ftp.xemacs.org/pub/xemacs/. While the -nH option can remove the ftp.xemacs.org/ part, you are still stuck with pub/xemacs. This is where --cut-dirs comes in handy; it makes Wget not ``see'' number remote directory components. Here are several examples of how the --cut-dirs option works.

 

        No options        -> ftp.xemacs.org/pub/xemacs/
        -nH               -> pub/xemacs/
        -nH --cut-dirs=1  -> xemacs/
        -nH --cut-dirs=2  -> .

 

        --cut-dirs=1      -> ftp.xemacs.org/xemacs/
        ...

If you just want to get rid of the directory structure, this option is similar to a combination of -nd and -P. However, unlike -nd, --cut-dirs does not lose with subdirectories---for instance, with -nH --cut-dirs=1, a beta/ subdirectory will be placed to xemacs/beta, as one would expect.

-P prefix
--directory-prefix=prefix
Set directory prefix to prefix. The directory prefix is the directory where all other files and subdirectories will be saved to, i.e. the top of the retrieval tree. The default is . (the current directory).
 

HTTP Options

-E
--html-extension
If a file of type application/xhtml+xml or text/html is downloaded and the URL does not end with the regexp \.[Hh][Tt][Mm][Ll]?, this option will cause the suffix .html to be appended to the local filename. This is useful, for instance, when you're mirroring a remote site that uses .asp pages, but you want the mirrored pages to be viewable on your stock Apache server. Another good use for this is when you're downloading CGI-generated materials. A URL like http://site.com/article.cgi?25 will be saved as article.cgi?25.html.

Note that filenames changed in this way will be re-downloaded every time you re-mirror a site, because Wget can't tell that the local X.html file corresponds to remote URL X (since it doesn't yet know that the URL produces output of type text/html or application/xhtml+xml). To prevent this re-downloading, you must use -k and -K so that the original version of the file will be saved as X.orig.

--http-user=user
--http-passwd=password
Specify the username user and password password on an HTTP server. According to the type of the challenge, Wget will encode them using either the "basic" (insecure) or the "digest" authentication scheme.

Another way to specify username and password is in the URL itself. Either method reveals your password to anyone who bothers to run "ps". To prevent the passwords from being seen, store them in .wgetrc or .netrc, and make sure to protect those files from other users with "chmod". If the passwords are really important, do not leave them lying in those files either---edit the files and delete them after Wget has started the download.

For more information about security issues with Wget, see the GNU Info entry for wget.
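As a sketch (host, user name, and password are placeholders), the two ways of supplying credentials look like this; note again that both are visible to other users via "ps":

        wget --http-user=alice --http-passwd=secret http://www.example.com/private/report.pdf
        wget http://alice:secret@www.example.com/private/report.pdf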

--no-cache
Disable server-side cache. In this case, Wget will send the remote server an appropriate directive (Pragma: no-cache) to get the file from the remote service, rather than returning the cached version. This is especially useful for retrieving and flushing out-of-date documents on proxy servers.

Caching is allowed by default.

--no-cookies
Disable the use of cookies. Cookies are a mechanism for maintaining server-side state. The server sends the client a cookie using the "Set-Cookie" header, and the client responds with the same cookie upon further requests. Since cookies allow the server owners to keep track of visitors and for sites to exchange this information, some consider them a breach of privacy. The default is to use cookies; however, storing cookies is not on by default.
--load-cookies file
Load cookies from file before the first HTTP retrieval. file is a textual file in the format originally used by Netscape's cookies.txt file.

You will typically use this option when mirroring sites that require that you be logged in to access some or all of their content. The login process typically works by the web server issuing an HTTP cookie upon receiving and verifying your credentials. The cookie is then resent by the browser when accessing that part of the site, and so proves your identity.

Mirroring such a site requires Wget to send the same cookies your browser sends when communicating with the site. This is achieved by --load-cookies---simply point Wget to the location of the cookies.txt file, and it will send the same cookies your browser would send in the same situation. Different browsers keep textual cookie files in different locations:

Netscape 4.x.
The cookies are in ~/.netscape/cookies.txt.
Mozilla and Netscape 6.x.
Mozilla's cookie file is also named cookies.txt, located somewhere under ~/.mozilla, in the directory of your profile. The full path usually ends up looking somewhat like ~/.mozilla/default/some-weird-string/cookies.txt.
Internet Explorer.
You can produce a cookie file Wget can use by using the File menu, Import and Export, Export Cookies. This has been tested with Internet Explorer 5; it is not guaranteed to work with earlier versions.
Other browsers.
If you are using a different browser to create your cookies, --load-cookies will only work if you can locate or produce a cookie file in the Netscape format that Wget expects.

If you cannot use --load-cookies, there might still be an alternative. If your browser supports a ``cookie manager'', you can use it to view the cookies used when accessing the site you're mirroring. Write down the name and value of the cookie, and manually instruct Wget to send those cookies, bypassing the ``official'' cookie support:

 

        wget --cookies=off --header "Cookie: <name>=<value>"

--save-cookies file
Save cookies to file before exiting. This will not save cookies that have expired or that have no expiry time (so-called ``session cookies''), but also see --keep-session-cookies.
--keep-session-cookies
When specified, causes --save-cookies to also save session cookies. Session cookies are normally not saved because they are supposed to be forgotten when you exit the browser. Saving them is useful on sites that require you to log in or to visit the home page before you can access some pages. With this option, multiple Wget runs are considered a single browser session as far as the site is concerned.

Since the cookie file format does not normally carry session cookies, Wget marks them with an expiry timestamp of 0. Wget's --load-cookies recognizes those as session cookies, but it might confuse other browsers. Also note that cookies so loaded will be treated as other session cookies, which means that if you want --save-cookies to preserve them again, you must use --keep-session-cookies again.

--ignore-length
Unfortunately, some HTTP servers (CGI programs, to be more precise) send out bogus "Content-Length" headers, which makes Wget go wild, as it thinks not all the document was retrieved. You can spot this syndrome if Wget retries getting the same document again and again, each time claiming that the (otherwise normal) connection has closed on the very same byte.

With this option, Wget will ignore the "Content-Length" header---as if it never existed.

--header=additional-header
Define an additional-header to be passed to the HTTP servers. Headers must contain a : preceded by one or more non-blank characters, and must not contain newlines.

You may define more than one additional header by specifying --header more than once.

 

        wget --header='Accept-Charset: iso-8859-2' \
             --header='Accept-Language: hr'        \
               http://fly.srk.fer.hr/

Specification of an empty string as the header value will clear all previous user-defined headers.

--proxy-user=user
--proxy-passwd=password
Specify the username user and password password for authentication on a proxy server. Wget will encode them using the "basic" authentication scheme.

Security considerations similar to those with --http-passwd pertain here as well.

--referer=url
Include `Referer: url' header in HTTP request. Useful for retrieving documents with server-side processing that assume they are always being retrieved by interactive web browsers and only come out properly when Referer is set to one of the pages that point to them.
--save-headers
Save the headers sent by the HTTP server to the file, preceding the actual contents, with an empty line as the separator.
-U agent-string
--user-agent=agent-string
Identify as agent-string to the HTTP server.

The HTTP protocol allows the clients to identify themselves using a "User-Agent" header field. This enables distinguishing the WWW software, usually for statistical purposes or for tracing of protocol violations. Wget normally identifies as Wget/version, version being the current version number of Wget.

However, some sites have been known to impose the policy of tailoring the output according to the "User-Agent"-supplied information. While conceptually this is not such a bad idea, it has been abused by servers denying information to clients other than "Mozilla" or Microsoft "Internet Explorer". This option allows you to change the "User-Agent" line issued by Wget. Use of this option is discouraged, unless you really know what you are doing.
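If a server refuses non-browser clients, the header can be overridden like so (the agent string and URL are merely examples):

        wget -U 'Mozilla/4.0 (compatible; MSIE 6.0; Windows NT 5.1)' http://www.example.com/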

--post-data=string
--post-file=file
Use POST as the method for all HTTP requests and send the specified data in the request body. "--post-data" sends string as data, whereas "--post-file" sends the contents of file. Other than that, they work in exactly the same way.

Please be aware that Wget needs to know the size of the POST data in advance. Therefore the argument to "--post-file" must be a regular file; specifying a FIFO or something like /dev/stdin won't work. It's not quite clear how to work around this limitation inherent in HTTP/1.0. Although HTTP/1.1 introduces chunked transfer that doesn't require knowing the request length in advance, a client can't use chunked unless it knows it's talking to an HTTP/1.1 server. And it can't know that until it receives a response, which in turn requires the request to have been completed --- a chicken-and-egg problem.

Note: if Wget is redirected after the POST request is completed, it will not send the POST data to the redirected URL. This is because URLs that process POST often respond with a redirection to a regular page (although that's technically disallowed), which does not desire or accept POST. It is not yet clear that this behavior is optimal; if it doesn't work out, it will be changed.

This example shows how to log in to a server using POST and then proceed to download the desired pages, presumably only accessible to authorized users:

 

        # Log in to the server.  This can be done only once.
        wget --save-cookies cookies.txt \
             --post-data 'user=foo&password=bar' \
             http://server.com/auth.php

 

        # Now grab the page or pages we care about.
        wget --load-cookies cookies.txt \
             -p http://server.com/interesting/article.php

 

FTP Options

--no-remove-listing
Don't remove the temporary .listing files generated by FTP retrievals. Normally, these files contain the raw directory listings received from FTP servers. Not removing them can be useful for debugging purposes, or when you want to be able to easily check on the contents of remote server directories (e.g. to verify that a mirror you're running is complete).

Note that even though Wget writes to a known filename for this file, this is not a security hole in the scenario of a user making .listing a symbolic link to /etc/passwd or something and asking "root" to run Wget in his or her directory. Depending on the options used, either Wget will refuse to write to .listing, making the globbing/recursion/time-stamping operation fail, or the symbolic link will be deleted and replaced with the actual .listing file, or the listing will be written to a .listing.number file.

Even though this situation isn't a problem, though, "root" should never run Wget in a non-trusted user's directory. A user could do something as simple as linking index.html to /etc/passwd and asking "root" to run Wget with -N or -r so the file will be overwritten.

--no-glob
Turn off FTP globbing. Globbing refers to the use of shell-like special characters (wildcards), like *, ?, [ and ] to retrieve more than one file from the same directory at once, like:

 

        wget ftp://gnjilux.srk.fer.hr/*.msg

By default, globbing will be turned on if the URL contains a globbing character. This option may be used to turn globbing on or off permanently.

You may have to quote the URL to protect it from being expanded by your shell. Globbing makes Wget look for a directory listing, which is system-specific. This is why it currently works only with Unix FTP servers (and the ones emulating Unix "ls" output).

--passive-ftp
Use the passive FTP retrieval scheme, in which the client initiates the data connection. This is sometimes required for FTP to work behind firewalls.
--retr-symlinks
Usually, when retrieving FTP directories recursively and a symbolic link is encountered, the linked-to file is not downloaded. Instead, a matching symbolic link is created on the local filesystem. The pointed-to file will not be downloaded unless this recursive retrieval would have encountered it separately and downloaded it anyway.

When --retr-symlinks is specified, however, symbolic links are traversed and the pointed-to files are retrieved. At this time, this option does not cause Wget to traverse symlinks to directories and recurse through them, but in the future it should be enhanced to do this.

Note that when retrieving a file (not a directory) because it was specified on the command-line, rather than because it was recursed to, this option has no effect. Symbolic links are always traversed in this case.

--no-http-keep-alive
Turn off the ``keep-alive'' feature for HTTP downloads. Normally, Wget asks the server to keep the connection open so that, when you download more than one document from the same server, they get transferred over the same TCP connection. This saves time and at the same time reduces the load on the server.

This option is useful when, for some reason, persistent (keep-alive) connections don't work for you, for example due to a server bug or due to the inability of server-side scripts to cope with the connections.

 

Recursive Retrieval Options

-r
--recursive
Turn on recursive retrieving.
-l depth
--level=depth
Specify recursion maximum depth level depth. The default maximum depth is 5.
--delete-after
This option tells Wget to delete every single file it downloads, after having done so. It is useful for pre-fetching popular pages through a proxy, e.g.:

 

        wget -r -nd --delete-after http://whatever.com/~popular/page/

The -r option is to retrieve recursively, and -nd to not create directories.

Note that --delete-after deletes files on the local machine. It does not issue the DELE command to remote FTP sites, for instance. Also note that when --delete-after is specified, --convert-links is ignored, so .orig files are simply not created in the first place.

-k
--convert-links
After the download is complete, convert the links in the document to make them suitable for local viewing. This affects not only the visible hyperlinks, but any part of the document that links to external content, such as embedded images, links to style sheets, hyperlinks to non-HTML content, etc.

Each link will be changed in one of the two ways:

*
The links to files that have been downloaded by Wget will be changed to refer to the file they point to as a relative link.

Example: if the downloaded file /foo/doc.html links to /bar/img.gif, also downloaded, then the link in doc.html will be modified to point to ../bar/img.gif. This kind of transformation works reliably for arbitrary combinations of directories.

*
The links to files that have not been downloaded by Wget will be changed to include host name and absolute path of the location they point to.

Example: if the downloaded file /foo/doc.html links to /bar/img.gif (or to ../bar/img.gif), then the link in doc.html will be modified to point to http://hostname/bar/img.gif.

Because of this, local browsing works reliably: if a linked file was downloaded, the link will refer to its local name; if it was not downloaded, the link will refer to its full Internet address rather than presenting a broken link. The fact that the former links are converted to relative links ensures that you can move the downloaded hierarchy to another directory.

Note that only at the end of the download can Wget know which links have been downloaded. Because of that, the work done by -k will be performed at the end of all the downloads.

-K
--backup-converted
When converting a file, back up the original version with a .orig suffix. Affects the behavior of -N.
-m
--mirror
Turn on options suitable for mirroring. This option turns on recursion and time-stamping, sets infinite recursion depth and keeps FTP directory listings. It is currently equivalent to -r -N -l inf --no-remove-listing.
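A typical mirroring invocation, adding link conversion and page requisites so the copy browses well locally, might be (the site is a placeholder):

        wget -m -k -p http://www.example.com/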
-p
--page-requisites
This option causes Wget to download all the files that are necessary to properly display a given HTML page. This includes such things as inlined images, sounds, and referenced stylesheets.

Ordinarily, when downloading a single HTML page, any requisite documents that may be needed to display it properly are not downloaded. Using -r together with -l can help, but since Wget does not ordinarily distinguish between external and inlined documents, one is generally left with ``leaf documents'' that are missing their requisites.

For instance, say document 1.html contains an "<IMG>" tag referencing 1.gif and an "<A>" tag pointing to external document 2.html. Say that 2.html is similar but that its image is 2.gif and it links to 3.html. Say this continues up to some arbitrarily high number.

If one executes the command:

 

        wget -r -l 2 http://<site>/1.html

then 1.html, 1.gif, 2.html, 2.gif, and 3.html will be downloaded. As you can see, 3.html is without its requisite 3.gif because Wget is simply counting the number of hops (up to 2) away from 1.html in order to determine where to stop the recursion. However, with this command:

 

        wget -r -l 2 -p http://<site>/1.html

all the above files and 3.html's requisite 3.gif will be downloaded. Similarly,

 

        wget -r -l 1 -p http://<site>/1.html

will cause 1.html, 1.gif, 2.html, and 2.gif to be downloaded. One might think that:

 

        wget -r -l 0 -p http://<site>/1.html

would download just 1.html and 1.gif, but unfortunately this is not the case, because -l 0 is equivalent to -l inf---that is, infinite recursion. To download a single HTML page (or a handful of them, all specified on the command-line or in a -i URL input file) and its (or their) requisites, simply leave off -r and -l:

 

        wget -p http://<site>/1.html

Note that Wget will behave as if -r had been specified, but only that single page and its requisites will be downloaded. Links from that page to external documents will not be followed. Actually, to download a single page and all its requisites (even if they exist on separate websites), and make sure the lot displays properly locally, this author likes to use a few options in addition to -p:

 

        wget -E -H -k -K -p http://<site>/<document>

To finish off this topic, it's worth knowing that Wget's idea of an external document link is any URL specified in an "<A>" tag, an "<AREA>" tag, or a "<LINK>" tag other than "<LINK REL="stylesheet">".

--strict-comments
Turn on strict parsing of HTML comments. The default is to terminate comments at the first occurrence of -->.

According to specifications, HTML comments are expressed as SGML declarations. Declaration is special markup that begins with <! and ends with >, such as <!DOCTYPE ...>, that may contain comments between a pair of -- delimiters. HTML comments are ``empty declarations'', SGML declarations without any non-comment text. Therefore, <!--foo--> is a valid comment, and so is <!--one-- --two-->, but <!--1--2--> is not.

On the other hand, most HTML writers don't perceive comments as anything other than text delimited with <!-- and -->, which is not quite the same. For example, something like <!------------> works as a valid comment as long as the number of dashes is a multiple of four (!). If not, the comment technically lasts until the next --, which may be at the other end of the document. Because of this, many popular browsers completely ignore the specification and implement what users have come to expect: comments delimited with <!-- and -->.

Until version 1.9, Wget interpreted comments strictly, which resulted in missing links in many web pages that displayed fine in browsers, but had the misfortune of containing non-compliant comments. Beginning with version 1.9, Wget has joined the ranks of clients that implement ``naive'' comments, terminating each comment at the first occurrence of -->.

If, for whatever reason, you want strict comment parsing, use this option to turn it on.

 

Recursive Accept/Reject Options

-A acclist --accept acclist
-R rejlist --reject rejlist
Specify comma-separated lists of file name suffixes or patterns to accept or reject.
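For instance (the URLs are placeholders), to keep only images during a recursive retrieval, or to skip archives:

        wget -r -A '*.jpg,*.png,*.gif' http://www.example.com/gallery/
        wget -r -R '*.zip,*.tar.gz' http://www.example.com/pub/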
-D domain-list
--domains=domain-list
Set domains to be followed. domain-list is a comma-separated list of domains. Note that it does not turn on -H.
--exclude-domains domain-list
Specify the domains that are not to be followed.
--follow-ftp
Follow FTP links from HTML documents. Without this option, Wget will ignore all the FTP links.
--follow-tags=list
Wget has an internal table of HTML tag / attribute pairs that it considers when looking for linked documents during a recursive retrieval. If a user wants only a subset of those tags to be considered, however, he or she should specify such tags in a comma-separated list with this option.
--ignore-tags=list
This is the opposite of the --follow-tags option. To skip certain HTML tags when recursively looking for documents to download, specify them in a comma-separated list.

In the past, this option was the best bet for downloading a single page and its requisites, using a command-line like:

 

        wget --ignore-tags=a,area -H -k -K -r http://<site>/<document>

However, the author of this option came across a page with tags like "<LINK REL="home" HREF="/">" and came to the realization that specifying tags to ignore was not enough. One can't just tell Wget to ignore "<LINK>", because then stylesheets will not be downloaded. Now the best bet for downloading a single page and its requisites is the dedicated --page-requisites option.

-H
--span-hosts
Enable spanning across hosts when doing recursive retrieving.
-L
--relative
Follow relative links only. Useful for retrieving a specific home page without any distractions, not even those from the same hosts.
-I list
--include-directories=list
Specify a comma-separated list of directories you wish to follow when downloading. Elements of list may contain wildcards.
-X list
--exclude-directories=list
Specify a comma-separated list of directories you wish to exclude from download. Elements of list may contain wildcards.
-np
--no-parent
Do not ever ascend to the parent directory when retrieving recursively. This is a useful option, since it guarantees that only the files below a certain hierarchy will be downloaded.
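For example (the URL is hypothetical), to mirror one subtree without wandering up into its parent directories:

        wget -r -np http://www.example.com/docs/manual/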
 

FILES

/usr/local/etc/wgetrc
Default location of the global startup file.
.wgetrc
User startup file.
 

BUGS

You are welcome to send bug reports about GNU Wget to <bug-wget@gnu.org>.

Before actually submitting a bug report, please try to follow a few simple guidelines.

1.
Please try to ascertain that the behavior you see really is a bug. If Wget crashes, it's a bug. If Wget does not behave as documented, it's a bug. If things work strange, but you are not sure about the way they are supposed to work, it might well be a bug.
2.
Try to repeat the bug in as simple circumstances as possible. E.g. if Wget crashes while downloading wget -rl0 -kKE -t5 -Y0 http://yoyodyne.com -o /tmp/log, you should try to see if the crash is repeatable, and if it will occur with a simpler set of options. You might even try to start the download at the page where the crash occurred to see if that page somehow triggered the crash.

Also, while I will probably be interested to know the contents of your .wgetrc file, just dumping it into the debug message is probably a bad idea. Instead, you should first try to see if the bug repeats with .wgetrc moved out of the way. Only if it turns out that .wgetrc settings affect the bug, mail me the relevant parts of the file.

3.
Please start Wget with -d option and send the log (or the relevant parts of it). If Wget was compiled without debug support, recompile it. It is much easier to trace bugs with debug support on.
4.
If Wget has crashed, try to run it in a debugger, e.g. "gdb `which wget` core" and type "where" to get the backtrace.
 

SEE ALSO

GNU Info entry for wget.  
