$Revision: 1.135 $ ($Date: 1998/11/07 00:10:38 $)
The latest version of this FAQ is always available from the main Apache web site, at <http://www.apache.org/docs/misc/FAQ.html>.
If you are reading a text-only version of this FAQ, you may find numbers enclosed in brackets (such as "[12]"). These refer to the list of reference URLs to be found at the end of the document. These references do not appear, and are not needed, for the hypertext version.
Why doesn't ErrorDocument 401 work?
How can I use ErrorDocument and SSI to simplify customized error messages?
Why do I get complaints about redefinition of "struct iovec" when compiling under Linux?
I get errors about the crypt function when I attempt to build Apache 1.2.
My .htaccess files are being ignored.
Apache was originally based on code and ideas found in the most popular HTTP server of the time, NCSA httpd 1.3 (early 1995). It has since evolved into a far superior system which can rival (and probably surpass) almost any other UNIX-based HTTP server in terms of functionality, efficiency, and speed.
Since it began, it has been completely rewritten, and includes many new features. Apache is, as of January 1997, the most popular WWW server on the Internet, according to the Netcraft Survey.
To address the concerns of a group of WWW providers and part-time httpd programmers that httpd didn't behave as they wanted it to. Apache is an entirely volunteer effort, completely funded by its members, not by commercial sales.
We, of course, owe a great debt to NCSA and their programmers for making the server Apache was based on. We now, however, have our own server, and our project is mostly our own. The Apache Project is an entirely independent venture.
A cute name which stuck. Apache is "A PAtCHy server". It was based on some existing code and a series of "patch files".
For an independent assessment, see Web Compare's comparison chart.
Apache has been shown to be substantially faster than many other free servers. Although certain commercial servers have claimed to surpass Apache's speed (it has not been demonstrated that any of these "benchmarks" are a good way of measuring WWW server speed at any rate), we feel that it is better to have a mostly-fast free server than an extremely-fast server that costs thousands of dollars. Apache is run on sites that get millions of hits per day, and they have experienced no performance difficulties.
Apache is run on over 1.2 million Internet servers (as of July 1998). It has been tested thoroughly by both developers and users. The Apache Group maintains rigorous standards before releasing new versions of their server, and our server runs without a hitch on over one half of all WWW servers available on the Internet. When bugs do show up, we release patches and new versions as soon as they are available.
The Apache project's web site includes a page with a partial list of sites running Apache.
There is no official support for Apache. None of the developers want to be swamped by a flood of trivial questions that can be resolved elsewhere. Bug reports and suggestions should be sent via the bug report page. Other questions should be directed to the comp.infosystems.www.servers.unix or comp.infosystems.www.servers.ms-windows newsgroup (as appropriate for the platform you use), where some of the Apache team lurk, in the company of many other httpd gurus who should be able to help.
Commercial support for Apache is, however, available from a number of third parties.
Indeed there is. See the main Apache web site. There is also a regular electronic publication called Apache Week available. Links to relevant Apache Week articles are included below where appropriate. There are also some Apache-specific books available.
You can find out how to download the source for Apache at the project's main web page.
If you are having trouble with your Apache server software, you should take the following steps:
Apache tries to be helpful when it encounters a problem. In many cases, it will provide some details by writing one or more messages to the server error log. Sometimes this is enough for you to diagnose and fix the problem yourself (such as file permissions or the like). The default location of the error log is /usr/local/apache/logs/error_log, but see the ErrorLog directive in your config files for the location on your server.
The latest version of the Apache Frequently-Asked Questions list can always be found at the main Apache web site.
Most problems that get reported to The Apache Group are recorded in the bug database. Please check the existing reports, open and closed, before adding one. If you find that your issue has already been reported, please don't add a "me, too" report. If the original report isn't closed yet, we suggest that you check it periodically. You might also consider contacting the original submitter, because there may be an email exchange going on about the issue that isn't getting recorded in the database.
A lot of common problems never make it to the bug database because there's already high Q&A traffic about them in the comp.infosystems.www.servers.unix newsgroup. Many Apache users, and some of the developers, can be found roaming its virtual halls, so it is suggested that you seek wisdom there. The chances are good that you'll get a faster answer there than from the bug database, even if you don't see your question already posted.
If you've gone through those steps above that are appropriate and have obtained no relief, then please do let The Apache Group know about the problem by logging a bug report.
If your problem involves the server crashing and generating a core dump, please include a backtrace (if possible). As an example,
# cd ServerRoot
# dbx httpd core
(dbx) where
(Substitute the appropriate locations for your ServerRoot and your httpd and core files. You may have to use gdb instead of dbx.)
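If gdb is the debugger available on your system, a roughly equivalent sequence would look like the following (again, substitute your own ServerRoot and file locations):
# cd ServerRoot
# gdb httpd core
(gdb) where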
Apache attempts to offer all the features and configuration options of NCSA httpd 1.3, as well as many of the additional features found in NCSA httpd 1.4 and NCSA httpd 1.5.
NCSA httpd appears to be moving toward adding experimental features which are not generally required at the moment. Some of the experiments will succeed while others will inevitably be dropped. The Apache philosophy is to add what's needed as and when it is needed.
Friendly interaction between Apache and NCSA developers should ensure that fundamental feature enhancements stay consistent between the two servers for the foreseeable future.
Apache recognizes all files in a directory named as a ScriptAlias as being eligible for execution rather than processing as normal documents. This applies regardless of the file name, so scripts in a ScriptAlias directory don't need to be named "*.cgi" or "*.pl" or whatever. In other words, all files in a ScriptAlias directory are scripts, as far as Apache is concerned.
To persuade Apache to execute scripts in other locations, such as in directories where normal documents may also live, you must tell it how to recognize them - and also that it's okay to execute them. For this, you need to use something like the AddHandler directive.
AddHandler cgi-script .cgi
The server will then recognize that all files in that location (and its logical descendants) that end in ".cgi" are script files, not documents.
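As a minimal illustration (the directory path is purely hypothetical), the relevant configuration might look like the following. The Options ExecCGI setting is the part that tells Apache it's okay to execute scripts in that location:
<Directory /usr/local/apache/htdocs/somedir>
Options +ExecCGI
AddHandler cgi-script .cgi
</Directory>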
In some situations, you might not want to actually allow all files named "*.cgi" to be executable. Perhaps all you want is to enable a particular file in a normal directory to be executable. This can be alternatively accomplished via mod_rewrite and the following steps:
RewriteEngine on
RewriteBase /~foo/bar/
RewriteRule ^quux\.cgi$ - [T=application/x-httpd-cgi]
It means just what it says: the server was expecting a complete set of HTTP headers (one or more followed by a blank line), and didn't get them.
The most common cause of this problem is the script dying before sending the complete set of headers, or possibly any at all, to the server. To see if this is the case, try running the script standalone from an interactive session, rather than as a script under the server. If you get error messages, this is almost certainly the cause of the "premature end of script headers" message.
The second most common cause of this (aside from people not outputting the required headers at all) is a result of an interaction with Perl's output buffering. To make Perl flush its buffers after each output statement, insert the following statements around the print or write statements that send your HTTP headers:
{
local ($oldbar) = $|;
$cfh = select (STDOUT);
$| = 1;
#
# print your HTTP headers here
#
$| = $oldbar;
select ($cfh);
}
This is generally only necessary when you are calling external programs from your script that send output to stdout, or if there will be a long delay between the time the headers are sent and the actual content starts being emitted. To maximize performance, you should turn buffer-flushing back off (with $| = 0 or the equivalent) after the statements that send the headers, as displayed above.
If your script isn't written in Perl, do the equivalent thing for whatever language you are using (e.g., for C, call fflush() after writing the headers).
Another cause for the "premature end of script headers" message are the RLimitCPU and RLimitMEM directives. You may get the message if the CGI script was killed due to a resource limit.
SSI (an acronym for Server-Side Include) directives allow static HTML documents to be enhanced at run-time (e.g., when delivered to a client by Apache). The format of SSI directives is covered in the mod_include manual; suffice it to say that Apache supports not only SSI but xSSI (eXtended SSI) directives.
Processing a document at run-time is called parsing it; hence the term "parsed HTML" sometimes used for documents that contain SSI instructions. Parsing tends to be extremely resource-consumptive, and is not enabled by default. It can also interfere with the cachability of your documents, which can put a further load on your server. (See the next question for more information about this.)
To enable SSI processing, you need to build your server with the mod_include module (it is included by default), make sure that Options Includes is enabled for the directories in question, and add a directive such as:
AddHandler server-parsed .shtml
This indicates that all files ending in ".shtml" in that location (or its descendants) should be parsed. Note that using ".html" will cause all normal HTML files to be parsed, which may put an inordinate load on your server.
For additional information, see the Apache Week article on Using Server Side Includes.
Since the server is performing run-time processing of your SSI directives, which may change the content shipped to the client, it can't know at the time it starts parsing what the final size of the result will be, or whether the parsed result will always be the same. This means that it can't generate Content-Length or Last-Modified headers. Caches commonly work by comparing the Last-Modified of what's in the cache with that being delivered by the server. Since the server isn't sending that header for a parsed document, whatever's doing the caching can't tell whether the document has changed or not - and so fetches it again to be on the safe side.
You can work around this in some cases by causing an Expires header to be generated. (See the mod_expires documentation for more details.) Another possibility is to use the XBitHack Full mechanism, which tells Apache to send (under certain circumstances detailed in the XBitHack directive description) a Last-Modified header based upon the last modification time of the file being parsed. Note that this may actually be lying to the client if the parsed file doesn't change but the SSI-inserted content does; if the included content changes often, this can result in stale copies being cached.
So you want to include SSI directives in the output from your CGI script, but can't figure out how to do it? The short answer is "you can't." This is potentially a security liability and, more importantly, it can not be cleanly implemented under the current server API. The best workaround is for your script itself to do what the SSIs would be doing. After all, it's generating the rest of the content.
This is a feature The Apache Group hopes to add in the next major release after 1.3.
This is almost always due to having some setting in your config file that sets "Options Includes" or some other setting for your DocumentRoot but not for other directories. If you set it inside a Directory section, then that setting will only apply to that directory.
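For instance, a sketch of what enabling parsed documents for an additional directory might look like (the path is purely illustrative):
<Directory /usr/local/apache/htdocs/other>
Options +Includes
AddHandler server-parsed .shtml
</Directory>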
Apache version 1.1 and above comes with a proxy module. If compiled in, this will make Apache act as a caching-proxy server.
"Multiviews" is the general name given to the Apache server's ability to provide language-specific document variants in response to a request. This is documented quite thoroughly in the content negotiation description page. In addition, Apache Week carried an article on this subject entitled "Content Negotiation Explained".
You are probably running into resource limitations in your operating system. The most common limitation is the per-process limit on file descriptors, which is almost always the cause of problems seen when adding virtual hosts. Apache often does not give an intuitive error message, because it is normally some library routine (such as gethostbyname()) which needs file descriptors and doesn't complain intelligibly when it can't get them.
Each log file requires a file descriptor, which means that if you are using separate access and error logs for each virtual host, each virtual host needs two file descriptors. Each Listen directive also needs a file descriptor.
Typical values for the per-process file descriptor limit that we've seen are in the neighborhood of 128 or 250. When the server bumps into this limit, it may dump core with a SIGSEGV, it might just hang, or it may limp along and you'll see (possibly meaningful) errors in the error log. One common problem that occurs when you run into a file descriptor limit is that CGI scripts stop being executed properly.
As to what you can do about this: you can reduce the number of descriptors Apache needs (for example, by using fewer separate log files or Listen directives), or increase the number of file descriptors available to the server process (see your system's documentation on the limit or ulimit commands). For some systems, information on how to do this is available in the performance hints page. There is a specific note for FreeBSD below.
For Windows 95, try modifying your C:\CONFIG.SYS file to include a line like
FILES=300
Remember that you'll need to reboot your Windows 95 system in order for the new value to take effect.
Since this is an operating-system limitation, there's not much else available in the way of solutions.
As of 1.2.1 we have made attempts to work around various limitations involving running with many descriptors. More information is available.
On versions of FreeBSD before 3.0, the FD_SETSIZE define defaults to 256. This means that you will have trouble usefully using more than 256 file descriptors in Apache. This can be increased, but doing so can be tricky.
If you are using a version prior to 2.2, you need to recompile your kernel with a larger FD_SETSIZE. This can be done by adding a line such as:
options FD_SETSIZE nnn
to your kernel config file. Starting at version 2.2, this is no longer necessary.
If you are using a version of 2.1-stable from after 1997/03/10 or 2.2 or 3.0-current from before 1997/06/28, there is a limit in the resolver library that prevents it from using more file descriptors than what FD_SETSIZE is set to when libc is compiled. To increase this, you have to recompile libc with a higher FD_SETSIZE.
In FreeBSD 3.0, the default FD_SETSIZE has been increased to 1024 and the above limitation in the resolver library has been removed.
After you deal with the appropriate changes above, you can increase the setting of FD_SETSIZE at Apache compilation time by adding "-DFD_SETSIZE=nnn" to the EXTRA_CFLAGS line in your Configuration file.
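For example, to build with a limit of 1024 descriptors (the value here is just an illustration), the line in Configuration might read:
EXTRA_CFLAGS=-DFD_SETSIZE=1024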
This is almost always due to Apache not being configured to treat the file you are trying to POST to as a CGI script. You can not POST to a normal HTML file; the operation has no meaning. See the FAQ entry on CGIs outside ScriptAliased directories for details on how to configure Apache to treat the file in question as a CGI.
Yes, you can - but it's a very bad idea. Here are some of the reasons:
If you still want to do this in light of the above disadvantages, the method is left as an exercise for the reader. It'll void your Apache warranty, though, and you'll lose all accumulated UNIX guru points.
Why doesn't ErrorDocument 401 work?
You need to use it with a URL in the form "/foo/bar" and not one with a method and hostname such as "http://host/foo/bar". See the ErrorDocument documentation for details. This was incorrectly documented in the past.
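For example (the document path and hostname are purely illustrative):
# A local URL-path works:
ErrorDocument 401 /errors/unauthorized.html
# A full URL with method and hostname does not work for 401:
ErrorDocument 401 http://www.example.com/errors/unauthorized.html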
How can I use ErrorDocument and SSI to simplify customized error messages?
Have a look at this document. It shows, in example form, how you can use a combination of XSSI and content negotiation to tailor a set of ErrorDocuments to your personal taste, and to return different internationalized error responses based on the client's native language.
Your Group directive (probably in conf/httpd.conf) needs to name a group that actually exists in the /etc/group file (or your system's equivalent).
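For example, assuming a group named nobody exists in /etc/group on your system:
Group nobody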
Apache does not automatically send a cookie on every response unless you have re-compiled it with the mod_usertrack module and specifically enabled it with the CookieTracking directive. This module has been in Apache since version 1.2. It may help track users, and uses cookies to do this. If you are not using the data generated by mod_usertrack, do not compile it into Apache.
Firstly, you do not need to compile in mod_cookies in order for your scripts to work (see the previous question for more about mod_cookies). Apache passes on your Set-Cookie header fine, with or without this module. If cookies do not work, it will be because your script does not work properly, or because your browser does not use cookies or is not set up to accept them.
As of version 1.2, Apache is an HTTP/1.1 (HyperText Transfer Protocol version 1.1) server. This fact is reflected in the protocol version that's included in the response headers sent to a client when processing a request. Unfortunately, low-level Web access classes included in the Java Development Kit (JDK) version 1.0.2 expect to see the version string "HTTP/1.0" and do not correctly interpret the "HTTP/1.1" value Apache is sending (this part of the response is a declaration of what the server can do rather than a declaration of the dialect of the response). The result is that the JDK methods do not correctly parse the headers, and include them with the document content by mistake.
This is definitely a bug in the JDK 1.0.2 foundation classes from Sun, and it has been fixed in version 1.1. However, the classes in question are part of the virtual machine environment, which means they're part of the Web browser (if Java-enabled) or the Java environment on the client system - so even if you develop your classes with a recent JDK, the eventual users might encounter the problem. The classes involved are replaceable by vendors implementing the Java virtual machine environment, and so even those that are based upon the 1.0.2 version may not have this problem.
In the meantime, a workaround is to tell Apache to "fake" an HTTP/1.0 response to requests that come from the JDK methods; this can be done by including a line such as the following in your server configuration files:
BrowserMatch Java1.0 force-response-1.0
BrowserMatch JDK/1.0 force-response-1.0
More information about this issue can be found in the Java and HTTP/1.1 page at the Apache web site.
Because you need to install and configure a script to handle the uploaded files. This script is often called a "PUT" handler. There are several available, but they may have security problems. Using FTP uploads may be easier and more secure, at least for now. For more information, see the Apache Week article Publishing Pages with PUT.
The simple answer is that it was becoming too difficult to keep the version being included with Apache synchronized with the master copy at the FastCGI web site. When a new version of Apache was released, the version of the FastCGI module included with it would soon be out of date.
You can still obtain the FastCGI module for Apache from the master FastCGI web site.
This message almost always indicates that the client disconnected before Apache reached the point of calling setsockopt() for the connection. It shouldn't occur for more than about 1% of the requests your server handles, and it's advisory only in any case.
This is a normal message and nothing about which to be alarmed. It simply means that the client canceled the connection before it had been completely set up - such as by the end-user pressing the "Stop" button. People's patience being what it is, sites with response-time problems or slow network links may experience this more than high-capacity ones or those with large pipes to the network.
As of Apache 1.3, CGI scripts are essentially not buffered. Every time your script does a "flush" to output data, that data gets relayed on to the client. Some scripting languages, for example Perl, have their own buffering for output - this can be disabled by setting the $| special variable to 1. Of course this does increase the overall number of packets being transmitted, which can result in a sense of slowness for the end user.
Prior to 1.3, you needed to use "nph-" scripts to accomplish non-buffering. Today, the only difference between nph scripts and normal scripts is that nph scripts require the full HTTP headers to be sent.
Why do I get complaints about redefinition of "struct iovec" when compiling under Linux?
This is a conflict between your C library includes and your kernel includes. You need to make sure that the versions of both are matched properly. There are two workarounds, either of which will solve the problem:
Remove the definition of struct iovec from your C library includes; it is located in /usr/include/sys/uio.h.
Or, add -DNO_WRITEV to the EXTRA_CFLAGS line in your Configuration file and reconfigure/rebuild. This hurts performance and should only be used as a last resort.
In Apache version 1.2, the error log message about dumped core includes the directory where the dump file should be located. However, many Unixes do not allow a process that has called setuid() to dump core, for security reasons; the typical Apache setup has the server started as root to bind to port 80, after which it changes UIDs to a non-privileged user to serve requests.
Dealing with this is extremely operating system-specific, and may require rebuilding your system kernel. Consult your operating system documentation or vendor for more information about whether your system does this and how to bypass it. If there is a documented way of bypassing it, it is recommended that you bypass it only for the httpd server process if possible.
The canonical location for Apache's core-dump files is the ServerRoot directory. As of Apache version 1.3, the location can be set via the CoreDumpDirectory directive to a different directory. Make sure that this directory is writable by the user the server runs as (as opposed to the user the server is started as).
Two of the most common causes of this are:
EXTRA_CFLAGS=-DMAXIMUM_DNS
This will cause Apache to be very paranoid about making sure a particular host address is really assigned to the name it claims to be. Note that this can incur a significant performance penalty, however, because of all the name resolution requests being sent to a nameserver.
SSL (Secure Socket Layer) data transport requires encryption, and many governments have restrictions upon the import, export, and use of encryption technology. If Apache included SSL in the base package, its distribution would involve all sorts of legal and bureaucratic issues, and it would no longer be freely available. Also, some of the technology required to talk to current clients using SSL is patented by RSA Data Security, who restricts its use without a license.
Some SSL implementations of Apache are available, however; see the "related projects" page at the main Apache web site.
You can find out more about this topic in the Apache Week article about Apache and Secure Transactions.
Even though the registered MIME type for MIDI files is audio/midi, some browsers are not set up to recognize it as such; instead, they look for audio/x-midi. There are two things you can do to address this:
AddType audio/x-midi .mid .midi .kar
Note that this may break browsers that do recognize the audio/midi MIME type unless they're prepared to also handle audio/x-midi the same way.
If the server won't compile on your system, it is probably due to one of the following causes:
The Apache Group tests the ability to build the server on many different platforms. Unfortunately, we can't test all of the OS platforms there are. If you have verified that none of the above issues is the cause of your problem, and it hasn't been reported before, please submit a problem report. Be sure to include complete details, such as the compiler & OS versions and exact error messages.
Apache provides a couple of different ways of doing this. The recommended method is to compile the mod_log_config module into your configuration and use the CustomLog directive.
You can either log the additional information in files other than your normal transfer log, or you can add them to the records already being written. For example:
CustomLog logs/access_log "%h %l %u %t \"%r\" %s %b \"%{Referer}i\" \"%{User-Agent}i\""
This will add the values of the User-agent: and Referer: headers, which indicate the client and the referring page, respectively, to the end of each line in the access log.
You may want to check out the Apache Week article entitled: "Gathering Visitor Information: Customising Your Logfiles".
If you have installed BIND-8, then this is normally due to a conflict between your include files and your libraries. BIND-8 installs its include files and libraries in /usr/local/include/ and /usr/local/lib/, while the resolver that comes with your system is probably installed in /usr/include/ and /usr/lib/. If your system uses the header files in /usr/local/include/ before those in /usr/include/ but you do not use the new resolver library, then the two versions will conflict.
To resolve this, you can either make sure you use the include files and libraries that came with your system, or make sure to use the new include files and libraries. Adding -lbind to the EXTRA_LDFLAGS line in your Configuration file, then re-running Configure, should resolve the problem. (Apache versions 1.2.* and earlier use EXTRA_LFLAGS instead.)
Note: As of BIND 8.1.1, the BIND libraries and files are installed under /usr/local/bind by default, so you should not run into this problem. Should you want to use the BIND resolvers, you'll have to add the following to the respective lines:
EXTRA_CFLAGS=-I/usr/local/bind/include
EXTRA_LDFLAGS=-L/usr/local/bind/lib
EXTRA_LIBS=-lbind
When you access a directory without a trailing "/", Apache needs to send what is called a redirect to the client to tell it to add the trailing slash. If it did not do so, relative URLs would not work properly. When it sends the redirect, it needs to know the name of the server so that it can include it in the redirect. There are two ways for Apache to find this out; either it can guess, or you can tell it. If your DNS is configured correctly, it can normally guess without any problems. If it is not, however, then you need to tell it.
Add a ServerName directive to the config file to tell it what the domain name of the server is.
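For example (substitute your server's real fully-qualified domain name for the illustrative one below):
ServerName www.example.com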
There are several ways to do this; some of the more popular ones are to use the mod_auth, mod_auth_db, or mod_auth_dbm modules.
For an explanation on how to implement these restrictions, see Apache Week's articles on Using User Authentication or DBM User Authentication.
This variable is set and thus available in SSI or CGI scripts if and only if the requested document was protected by access authentication. For an explanation on how to implement these restrictions, see Apache Week's articles on Using User Authentication or DBM User Authentication.
Hint: When using a CGI script to receive the data from an HTML form, note that protecting the document containing the form is not sufficient to provide REMOTE_USER to the CGI script. You have to protect the CGI script, too - or, alternatively, protect only the CGI script (in which case authentication happens only after the form has been filled out).
Use the Satisfy directive, in particular the Satisfy Any directive, to require that only one of the access restrictions be met. For example, adding the following configuration to a .htaccess or server configuration file would restrict access to people who either are accessing the site from a host under domain.com or who can supply a valid username and password:
deny from all
allow from .domain.com
AuthType Basic
AuthUserFile /usr/local/apache/conf/htpasswd.users
AuthName "special directory"
require valid-user
satisfy any
See the user authentication question and the mod_access module for details on how the above directives work.
The mod_info module allows you to use a Web browser to see how your server is configured. Among the information it displays is the list of modules and their configuration directives. The "current" values for the directives are not necessarily those of the running server; they are extracted from the configuration files themselves at the time of the request. If the files have been changed since the server was last reloaded, the display will not match the values actively in use. If the files and the path to the files are not readable by the user the server runs as (see the User directive), then mod_info cannot read them in order to list their values. An entry will be made in the error log in this event, however.
Your kernel has been built without SysV IPC support. You will have to rebuild the kernel with that support enabled (it's under the "General Setup" submenu). Documentation for kernel building is beyond the scope of this FAQ; you should consult the Kernel HOWTO, the documentation provided with your distribution, or a Linux newsgroup/mailing list.
As a last-resort workaround, you can comment out the #define USE_SHMGET_SCOREBOARD definition in the LINUX section of src/conf.h and rebuild the server (prior to 1.3b4, simply removing #define HAVE_SHMGET would have sufficed). This will produce a server which is slower and less reliable.
Under normal circumstances, the Apache access control modules will pass unrecognized user IDs on to the next access control module in line. Only if the user ID is recognized and the password is validated (or not) will it give the usual success or "authentication failed" messages.
However, if the last access module in line 'declines' the validation request (because it has never heard of the user ID or because it is not configured), the http_request handler will give one of the following, confusing, errors:
This does not mean that you have to add an 'AuthUserFile /dev/null' line as some magazines suggest!
The solution is to ensure that at least the last module is authoritative and CONFIGURED. By default, mod_auth is authoritative and will give an OK/Denied, but only if it is configured with the proper AuthUserFile; likewise, it needs a proper AuthGroupFile if a valid group is required. (Remember that the modules are processed in the reverse order from that in which they appear in your compile-time Configuration file.)
A typical situation for this error is when you are using the mod_auth_dbm, mod_auth_msql, mod_auth_mysql, mod_auth_anon, or mod_auth_cookie modules on their own. These are not authoritative by default, and will pass the buck on to the (non-existent) next authentication module when the user ID is not in their respective database. Just add the appropriate 'XXXAuthoritative yes' line to the configuration.
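For instance, if mod_auth_dbm is the only authentication module you are using for a particular area, a line like the following (the exact directive name differs from module to module) makes it answer definitively instead of declining:
AuthDBMAuthoritative on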
In general it is a good idea (though not terribly efficient) to have the file-based mod_auth a module of last resort. This allows you to access the web server with a few special passwords even if the databases are down or corrupted. This does cost a file open/seek/close for each request in a protected area.
Some organizations feel very strongly about keeping the authentication information on a different machine than the webserver. With the mod_auth_msql, mod_auth_mysql, and other SQL modules connecting to (R)DBMses this is quite possible. Just configure an explicit host to contact.
Be aware that with mSQL and Oracle, opening and closing these database connections is very expensive and time consuming. You might want to look at the code in the auth_* modules and play with the compile time flags to alleviate this somewhat, if your RDBMS licences allow for it.
You have probably configured the Host by specifying a FQHN, and thus the libmsql will use a full blown TCP/IP socket to talk to the database, rather than a fast internal device. The libmsql, the mSQL FAQ, and the mod_auth_msql documentation warn you about this. If you have to use different hosts, check out the mod_auth_msql code for some compile time flags which might - or might not - suit you.
There is a collection of Practical Solutions for URL-Manipulation where you can find all typical solutions the author of mod_rewrite currently knows of. If you have more interesting rulesets which solve particular problems not currently covered in this document, send it to Ralf S. Engelschall for inclusion. The other webmasters will thank you for avoiding the reinvention of the wheel.
There is an article by Ralf S. Engelschall about URL manipulations based on mod_rewrite in the "iX Multiuser Multitasking Magazin", issue #12/96. The German (original) version can be read online at <http://www.heise.de/ix/artikel/9612149/>; the English (translated) version can be found at <http://www.heise.de/ix/artikel/E/9612149/>.
Hmmm... there are a lot of reasons. First, mod_rewrite is itself a powerful module which can help you in nearly all aspects of URL rewriting, so it cannot be a trivial module by definition. To accomplish its hard job it uses software leverage: it makes use of a powerful regular-expression library by Henry Spencer which has been an integral part of Apache since version 1.2. And regular expressions themselves can be difficult for newcomers, while providing the most flexible power to the advanced hacker.
On the other hand, mod_rewrite has to work inside the Apache API environment and needs to do some tricks to fit there. For instance, the Apache API as of 1.x really was not designed for URL rewriting at the .htaccess level of processing, nor does it handle multiple rewrites in sequence by design. To provide these features mod_rewrite has to do some special (but API-compliant!) handling which leads to difficult processing inside the Apache kernel. While the user usually doesn't see anything of this processing, it can be difficult to find problems when some of your RewriteRules seem not to work.
Use "RewriteLog somefile" and "RewriteLogLevel 9" and have a precise look at the steps the rewriting engine performs. This is really the only one and best way to debug your rewriting configuration.
If the rule starts with /somedir/..., make sure that no /somedir directory really exists on the filesystem if you don't want the URL to match that directory; i.e., there must be no directory named somedir under the filesystem root. If there is such a directory, the URL will not get prefixed with DocumentRoot. This behaviour looks ugly, but is really important for some other aspects of URL rewriting.
You can't! The reasons are these: First, case translations for arbitrary-length URLs cannot be done via regex patterns and corresponding substitutions; one needs a per-character translation like the sed/Perl tr|..|..| feature. Second, just forcing URLs to upper or lower case will not completely solve the problem of case-INSENSITIVE URLs, because the URLs actually have to be rewritten to the correct case variant residing on the filesystem, since in later processing Apache needs to access the file - and the Unix filesystem is always case-SENSITIVE.
But there is a module named mod_speling.c (yes, it is named this way!) out there on the net. Try this one.
Because you have to enable the engine for every virtual host explicitly due to security concerns. Just add a "RewriteEngine on" to your virtual host configuration parts.
There is only one ugly solution: You have to surround the complete flag argument by quotation marks ("[E=...]"). Notice: The argument to quote here is not the argument to the E-flag, it is the argument of the Apache config file parser, i.e., the third argument of the RewriteRule here. So you have to write "[E=any text with whitespaces]".
The Common Gateway Interface (CGI) specification can be found at the original NCSA site <http://hoohoo.ncsa.uiuc.edu/cgi/interface.html>. This version hasn't been updated since 1995, and there have been some efforts to update it.
A new draft is being worked on with the intent of making it an informational RFC; you can find out more about this project at <http://web.golux.com/coar/cgi/>.
Yes, Apache is Year 2000 compliant.
Apache internally never stores years as two digits.
On the HTTP protocol level, RFC 1123-style dates are generated, which is the only format an HTTP/1.1-compliant server should generate. To be compatible with older applications, Apache recognizes ANSI C's asctime() and RFC 850-/RFC 1036-style date formats, too.
The asctime() format uses four-digit years, but the RFC 850 and RFC 1036 date formats only define a two-digit year. If Apache sees such a date with a value less than 70, it assumes that the century is 20 rather than 19.
Some aspects of Apache's output may use two-digit years, such as the automatic listing of directory contents provided by mod_autoindex with the FancyIndexing option enabled, but it is improper to depend upon such displays for specific syntax. And even that issue is being addressed by the developers; a future version of Apache should allow you to format that display as you like.
Although Apache is Year 2000 compliant, you may still get problems if the underlying OS has problems with dates past year 2000 (e.g., OS calls which accept or return year numbers). Most (UNIX) systems store dates internally as signed 32-bit integers which contain the number of seconds since 1st January 1970, so the magic boundary to worry about is the year 2038 and not 2000. But modern operating systems shouldn't cause any trouble at all.
In versions of Apache prior to 1.3b2, there was a lot of confusion regarding address-based virtual hosts and (HTTP/1.1) name-based virtual hosts, and the rules concerning how the server processed <VirtualHost> definitions were very complex and not well documented.
Apache 1.3b2 introduced a new directive, NameVirtualHost, which simplifies the rules quite a bit. However, changing the rules like this means that your existing name-based <VirtualHost> containers probably won't work correctly immediately following the upgrade.
To correct this problem, add the following line to the beginning of your server configuration file, before defining any virtual hosts:
NameVirtualHost n.n.n.n
Replace the "n.n.n.n" with the IP address to which the name-based virtual host names resolve; if you have multiple name-based hosts on multiple addresses, repeat the directive for each address.
Make sure that your name-based <VirtualHost> blocks contain ServerName and possibly ServerAlias directives so Apache can be sure to tell them apart correctly.
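As an illustrative sketch (the IP address, host names, and paths are hypothetical), a simple name-based setup might look like this:
NameVirtualHost 10.1.2.3

<VirtualHost 10.1.2.3>
ServerName www.example.com
DocumentRoot /usr/local/apache/htdocs/site1
</VirtualHost>

<VirtualHost 10.1.2.3>
ServerName www.example.org
ServerAlias example.org
DocumentRoot /usr/local/apache/htdocs/site2
</VirtualHost>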
Please see the Apache Virtual Host documentation for further details about configuration.
RedHat Linux versions 4.x (and possibly earlier) RPMs contain various nasty scripts which do not stop or restart Apache properly. These can affect you even if you're not running the RedHat supplied RPMs.
If you're using the default install then you're probably running Apache 1.1.3, which is outdated. From RedHat's ftp site you can pick up a more recent RPM for Apache 1.2.x. This will solve one of the problems.
If you're using a custom-built Apache rather than the RedHat RPMs, then you should rpm -e apache. In particular, you want the mildly broken /etc/logrotate.d/apache script to be removed, and you want the broken /etc/rc.d/init.d/httpd (or httpd.init) script to be removed. The latter is actually fixed by the apache-1.2.5 RPMs, but if you're building your own Apache then you probably don't want the RedHat files.
We can't stress enough how important it is for folks, especially vendors, to follow the stopping Apache directions given in our documentation. In RedHat's defense, the broken scripts were necessary with Apache 1.1.x because the Linux support in 1.1.x was very poor, and there were various race conditions on all platforms. None of this should be necessary with Apache 1.2 and later.
You should read the previous note about problems with RedHat installations. It is entirely likely that your installation has start/stop/restart scripts which were built for an earlier version of Apache. Versions earlier than 1.2.0 had various race conditions that made it necessary to use kill -9 at times to take out all the httpd servers. But that should not be necessary any longer. You should follow the directions on how to stop and restart Apache.
As of Apache 1.3 there is a script src/support/apachectl which, after a bit of customization, is suitable for starting, stopping, and restarting your server.
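For example, once the script has been customized and installed, typical invocations might look something like this (assuming the standard subcommands provided by the 1.3 script):
# apachectl configtest
# apachectl start
# apachectl graceful
# apachectl stop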
RedHat messed up and forgot to put a content type for .htm files into /etc/mime.types. Edit /etc/mime.types, find the line containing html, and add htm to it. Then restart your httpd server:
kill -HUP `cat /var/run/httpd.pid`
Then clear your browsers' caches. (Many browsers won't re-examine the content type after they've reloaded a page.)
I get errors about the crypt function when I attempt to build Apache 1.2.
glibc puts the crypt function into a separate library. Edit your src/Configuration file and set this:
EXTRA_LIBS=-lcrypt
Then re-run src/Configure and re-execute the make.
These are symptoms of a file locking problem, which usually means that the server is trying to use a synchronization file on an NFS filesystem.
Because of its parallel-operation model, the Apache Web server needs to provide some form of synchronization when accessing certain resources. One of these synchronization methods involves taking out locks on a file, which means that the filesystem whereon the lockfile resides must support locking. In many cases this means it can't be kept on an NFS-mounted filesystem.
To cause the Web server to work around the NFS locking limitations, include a line such as the following in your server configuration files:
LockFile /var/run/apache-lock
The directory should not be generally writable (e.g., don't use /var/tmp). See the LockFile documentation for more information.
Check out Dean Gaudet's performance tuning page.
Regular expressions are a way of describing a pattern - for example, "all the words that begin with the letter A" or "every 10-digit phone number" or even "Every sentence with two commas in it, and no capital letter Q". Regular expressions (aka "regexp"s) are useful in Apache because they let you apply certain attributes against collections of files or resources in very flexible ways - for example, all .gif and .jpg files under any "images" directory could be written as /.*\/images\/.*\.(jpg|gif)$/.
The best overview around is probably the one which comes with Perl. We implement a simple subset of Perl's regexp support, but it's still a good way to learn what they mean. You can start by going to the CPAN page on regular expressions, and branching out from there.
GCC parses your system header files and produces a modified subset which it uses for compiling. This behaviour ties GCC tightly to the version of your operating system. So, for example, if you were running IRIX 5.3 when you built GCC and then upgrade to IRIX 6.2 later, you will have to rebuild GCC. Similarly for Solaris 2.4, 2.5, or 2.5.1 when you upgrade to 2.6. Sometimes you can type "gcc -v" and it will tell you the version of the operating system it was built against.
If you fail to do this, then it is very likely that Apache will fail to build. One of the most common errors is with readv, writev, or uio.h. This is not a bug with Apache. You will need to re-install GCC.
My .htaccess files are being ignored.
This is almost always due to your AllowOverride directive being set incorrectly for the directory in question. If it is set to None, then .htaccess files will not even be looked for. If you do have one that is set, then be certain it covers the directory you are trying to use the .htaccess file in. This is normally accomplished by ensuring it is inside the proper Directory container.
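As an illustration (the directory path is hypothetical), a container permitting .htaccess files to override authentication and Options settings would look something like:
<Directory /usr/local/apache/htdocs/somedir>
AllowOverride AuthConfig Options
</Directory>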
The Apache Group encourages patches from outside developers. There are two main types of patches: small bugfixes and general improvements. Bugfixes should be submitted using the Apache bug report page. Improvements, modifications, and additions should follow the instructions below.
In general, the first course of action is to be a member of the new-httpd@apache.org mailing list. This indicates to the Group that you are closely following the latest Apache developments. Your patch file should be generated using either 'diff -c' or 'diff -u' against the latest CVS tree. To submit your patch, send email to new-httpd@apache.org with a Subject: line that starts with [PATCH] and includes a general description of the patch. In the body of the message, the patch should be clearly described and then included at the end of the message. If the patch-file is long, you can note a URL to the file instead of including the file itself. Use of MIME enclosures/attachments should be avoided.
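For example, a unified diff of a single modified file against your pristine copy might be produced along these lines (the file names are purely illustrative):
diff -u src/main/http_protocol.c.orig src/main/http_protocol.c > my-change.patch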
Be prepared to respond to any questions about your patches and possibly defend your code. If your patch results in a lot of discussion, you may be asked to submit an updated patch that incorporates all changes and suggestions.
This is a known problem with certain versions of the AIX C compiler. IBM are working on a solution, and the issue is being tracked by problem report #2312.
The simple answer is: "It hasn't." This misconception is usually caused by the site in question having migrated to the Apache Web server software, but not having migrated the site's content yet. When Apache is installed, the default page that gets installed tells the Webmaster the installation was successful. The expectation is that this default page will be replaced with the site's real content. If it hasn't been, complain to the Webmaster, not to the Apache project -- we just make the software and aren't responsible for what people do (or don't do) with it.
The short answer is: "You aren't." Usually when someone thinks the Apache site is originating spam, it's because they've traced the spam to a Web site, and the Web site says it's using Apache. See the previous FAQ entry for more details on this phenomenon.
No marketing spam originates from the Apache site. The only mail that comes from the site goes only to addresses that have been requested to receive the mail.