The previous article analyzed the process method of the org.apache.coyote.http11.AbstractHttp11Processor class, using the getInputBuffer().parseRequestLine call as an example to show how bytes are pulled from the socket's I/O stream and, following the HTTP protocol, assembled into the corresponding fields of Tomcat's internal org.apache.coyote.Request object.
This article and the next explain how that internal request object travels from the Connector to the Engine, the Host, the Context, and finally the Servlet.
Back to the source of the process method of org.apache.coyote.http11.AbstractHttp11Processor:
```java
public SocketState process(SocketWrapper<S> socketWrapper)
        throws IOException {
    RequestInfo rp = request.getRequestProcessor();
    rp.setStage(org.apache.coyote.Constants.STAGE_PARSE);

    // Setting up the I/O
    setSocketWrapper(socketWrapper);
    getInputBuffer().init(socketWrapper, endpoint);
    getOutputBuffer().init(socketWrapper, endpoint);

    // Flags
    error = false;
    keepAlive = true;
    comet = false;
    openSocket = false;
    sendfileInProgress = false;
    readComplete = true;
    if (endpoint.getUsePolling()) {
        keptAlive = false;
    } else {
        keptAlive = socketWrapper.isKeptAlive();
    }

    if (disableKeepAlive()) {
        socketWrapper.setKeepAliveLeft(0);
    }

    while (!error && keepAlive && !comet && !isAsync() &&
            upgradeInbound == null && !endpoint.isPaused()) {

        // Parsing the request header
        try {
            setRequestLineReadTimeout();

            if (!getInputBuffer().parseRequestLine(keptAlive)) {
                if (handleIncompleteRequestLineRead()) {
                    break;
                }
            }

            if (endpoint.isPaused()) {
                // 503 - Service unavailable
                response.setStatus(503);
                error = true;
            } else {
                // Make sure that connectors that are non-blocking during
                // header processing (NIO) only set the start time the first
                // time a request is processed.
                if (request.getStartTime() < 0) {
                    request.setStartTime(System.currentTimeMillis());
                }
                keptAlive = true;
                // Set this every time in case limit has been changed via JMX
                request.getMimeHeaders().setLimit(endpoint.getMaxHeaderCount());
                // Currently only NIO will ever return false here
                if (!getInputBuffer().parseHeaders()) {
                    // We've read part of the request, don't recycle it
                    // instead associate it with the socket
                    openSocket = true;
                    readComplete = false;
                    break;
                }
                if (!disableUploadTimeout) {
                    setSocketTimeout(connectionUploadTimeout);
                }
            }
        } catch (IOException e) {
            if (getLog().isDebugEnabled()) {
                getLog().debug(
                        sm.getString("http11processor.header.parse"), e);
            }
            error = true;
            break;
        } catch (Throwable t) {
            ExceptionUtils.handleThrowable(t);
            UserDataHelper.Mode logMode = userDataHelper.getNextMode();
            if (logMode != null) {
                String message = sm.getString(
                        "http11processor.header.parse");
                switch (logMode) {
                    case INFO_THEN_DEBUG:
                        message += sm.getString(
                                "http11processor.fallToDebug");
                        //$FALL-THROUGH$
                    case INFO:
                        getLog().info(message);
                        break;
                    case DEBUG:
                        getLog().debug(message);
                }
            }
            // 400 - Bad Request
            response.setStatus(400);
            adapter.log(request, response, 0);
            error = true;
        }

        if (!error) {
            // Setting up filters, and parse some request headers
            rp.setStage(org.apache.coyote.Constants.STAGE_PREPARE);
            try {
                prepareRequest();
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                if (getLog().isDebugEnabled()) {
                    getLog().debug(sm.getString(
                            "http11processor.request.prepare"), t);
                }
                // 400 - Bad Request
                response.setStatus(400);
                adapter.log(request, response, 0);
                error = true;
            }
        }

        if (maxKeepAliveRequests == 1) {
            keepAlive = false;
        } else if (maxKeepAliveRequests > 0 &&
                socketWrapper.decrementKeepAlive() <= 0) {
            keepAlive = false;
        }

        // Process the request in the adapter
        if (!error) {
            try {
                rp.setStage(org.apache.coyote.Constants.STAGE_SERVICE);
                adapter.service(request, response);
                // Handle when the response was committed before a serious
                // error occurred. Throwing a ServletException should both
                // set the status to 500 and set the errorException.
                // If we fail here, then the response is likely already
                // committed, so we can't try and set headers.
                if (keepAlive && !error) { // Avoid checking twice.
                    error = response.getErrorException() != null ||
                            (!isAsync() &&
                            statusDropsConnection(response.getStatus()));
                }
                setCometTimeouts(socketWrapper);
            } catch (InterruptedIOException e) {
                error = true;
            } catch (HeadersTooLargeException e) {
                error = true;
                // The response should not have been committed but check it
                // anyway to be safe
                if (!response.isCommitted()) {
                    response.reset();
                    response.setStatus(500);
                    response.setHeader("Connection", "close");
                }
            } catch (Throwable t) {
                ExceptionUtils.handleThrowable(t);
                getLog().error(sm.getString(
                        "http11processor.request.process"), t);
                // 500 - Internal Server Error
                response.setStatus(500);
                adapter.log(request, response, 0);
                error = true;
            }
        }

        // Finish the handling of the request
        rp.setStage(org.apache.coyote.Constants.STAGE_ENDINPUT);

        if (!isAsync() && !comet) {
            if (error) {
                // If we know we are closing the connection, don't drain
                // input. This way uploading a 100GB file doesn't tie up the
                // thread if the servlet has rejected it.
                getInputBuffer().setSwallowInput(false);
            }
            endRequest();
        }

        rp.setStage(org.apache.coyote.Constants.STAGE_ENDOUTPUT);

        // If there was an error, make sure the request is counted as
        // an error, and update the statistics counter
        if (error) {
            response.setStatus(500);
        }
        request.updateCounters();

        if (!isAsync() && !comet || error) {
            getInputBuffer().nextRequest();
            getOutputBuffer().nextRequest();
        }

        if (!disableUploadTimeout) {
            if (endpoint.getSoTimeout() > 0) {
                setSocketTimeout(endpoint.getSoTimeout());
            } else {
                setSocketTimeout(0);
            }
        }

        rp.setStage(org.apache.coyote.Constants.STAGE_KEEPALIVE);

        if (breakKeepAliveLoop(socketWrapper)) {
            break;
        }
    }

    rp.setStage(org.apache.coyote.Constants.STAGE_ENDED);

    if (error || endpoint.isPaused()) {
        return SocketState.CLOSED;
    } else if (isAsync() || comet) {
        return SocketState.LONG;
    } else if (isUpgrade()) {
        return SocketState.UPGRADING;
    } else {
        if (sendfileInProgress) {
            return SocketState.SENDFILE;
        } else {
            if (openSocket) {
                if (readComplete) {
                    return SocketState.OPEN;
                } else {
                    return SocketState.LONG;
                }
            } else {
                return SocketState.CLOSED;
            }
        }
    }
}
```
To summarize what this method does: the opening section initializes state, wiring up the input/output buffers and resetting the error/keepAlive/comet flags. The body of the large while loop then parses the request line and headers (the parsing internals were covered in the previous article), hands the request to the adapter via adapter.service(request, response), and finally wraps up the request: discarding any leftover, now-meaningless bytes of the body, setting the response status, and preparing the buffers for the next request on the connection.
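The overall shape of that loop can be reduced to a small runnable skeleton. This is a simplification with made-up names, not Tomcat code: parse the next request, hand it off, clean up, and repeat until an error occurs or the keep-alive budget is exhausted.

```java
// Skeleton of the keep-alive loop shape in process() (illustrative only).
public class KeepAliveLoopSketch {

    // Returns how many requests were handled on one simulated connection.
    // parseResults[i] stands in for the i-th parseRequestLine/parseHeaders result.
    static int runConnection(int maxKeepAliveRequests, boolean[] parseResults) {
        int handled = 0;
        boolean error = false;
        int keepAliveLeft = maxKeepAliveRequests;   // cf. decrementKeepAlive()
        while (!error && keepAliveLeft > 0) {
            keepAliveLeft--;
            boolean parsed = handled < parseResults.length && parseResults[handled];
            if (!parsed) {
                error = true;   // real code would set a 400 status here
                break;
            }
            handled++;          // stands in for adapter.service(request, response)
            // endRequest(): swallow leftover input, reset buffers for the next request
        }
        return handled;
    }

    public static void main(String[] args) {
        // Three successful parses but a budget of 2: the keep-alive limit wins.
        System.out.println(runConnection(2, new boolean[] {true, true, true})); // 2
    }
}
```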
The request object's journey into the container starts at this call:
```java
adapter.service(request, response);
```
The adapter here is set when the Http11Processor is created; see the createProcessor method of the org.apache.coyote.http11.Http11Protocol.Http11ConnectionHandler class:
```java
protected Http11Processor createProcessor() {
    Http11Processor processor = new Http11Processor(
            proto.getMaxHttpHeaderSize(), (JIoEndpoint) proto.endpoint,
            proto.getMaxTrailerSize());
    processor.setAdapter(proto.adapter);
    processor.setMaxKeepAliveRequests(proto.getMaxKeepAliveRequests());
    processor.setKeepAliveTimeout(proto.getKeepAliveTimeout());
    processor.setConnectionUploadTimeout(
            proto.getConnectionUploadTimeout());
    processor.setDisableUploadTimeout(proto.getDisableUploadTimeout());
    processor.setCompressionMinSize(proto.getCompressionMinSize());
    processor.setCompression(proto.getCompression());
    processor.setNoCompressionUserAgents(proto.getNoCompressionUserAgents());
    processor.setCompressableMimeTypes(proto.getCompressableMimeTypes());
    processor.setRestrictedUserAgents(proto.getRestrictedUserAgents());
    processor.setSocketBuffer(proto.getSocketBuffer());
    processor.setMaxSavePostSize(proto.getMaxSavePostSize());
    processor.setServer(proto.getServer());
    processor.setDisableKeepAlivePercentage(
            proto.getDisableKeepAlivePercentage());
    register(processor);
    return processor;
}
```
As the code shows, the adapter being installed is the adapter field of org.apache.coyote.http11.Http11Protocol, and that field is assigned in the Connector class's initInternal method:
```java
protected void initInternal() throws LifecycleException {

    super.initInternal();

    // Initialize adapter
    adapter = new CoyoteAdapter(this);
    protocolHandler.setAdapter(adapter);

    // Make sure parseBodyMethodsSet has a default
    if (null == parseBodyMethodsSet) {
        setParseBodyMethods(getParseBodyMethods());
    }

    if (protocolHandler.isAprRequired() &&
            !AprLifecycleListener.isAprAvailable()) {
        throw new LifecycleException(
                sm.getString("coyoteConnector.protocolHandlerNoApr",
                        getProtocolHandlerClassName()));
    }

    try {
        protocolHandler.init();
    } catch (Exception e) {
        throw new LifecycleException
                (sm.getString
                        ("coyoteConnector.protocolHandlerInitializationFailed"), e);
    }

    // Initialize mapper listener
    mapperListener.init();
}
```
The two lines `adapter = new CoyoteAdapter(this);` and `protocolHandler.setAdapter(adapter);` are where the adapter is created and installed into the Http11Protocol object.
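This wiring is a textbook instance of the Adapter pattern: the protocol layer only knows an adapter interface, and the Connector decides which concrete adapter bridges into the servlet container. A minimal sketch of the idea, with hypothetical simplified types (Adapter and Processor below are stand-ins, not the real Tomcat classes):

```java
// The protocol layer depends only on this interface, never on Catalina.
interface Adapter {
    void service(String request, StringBuilder response);
}

class Processor {
    private Adapter adapter;

    // Mirrors Http11Processor.setAdapter(proto.adapter)
    void setAdapter(Adapter adapter) { this.adapter = adapter; }

    // Called once the request line and headers have been parsed.
    void process(String request, StringBuilder response) {
        adapter.service(request, response);
    }
}

public class AdapterWiringSketch {
    public static void main(String[] args) {
        Processor processor = new Processor();
        // Mirrors Connector.initInternal(): plug in the concrete adapter
        // before any request is processed.
        processor.setAdapter((req, res) -> res.append("container handled: ").append(req));

        StringBuilder out = new StringBuilder();
        processor.process("GET /", out);
        System.out.println(out); // container handled: GET /
    }
}
```

The payoff is that the same Processor works with any container front end; swapping CoyoteAdapter for another Adapter implementation requires no change to the HTTP parsing code.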
So the adapter.service(request, response) call seen above actually executes the service method of the org.apache.catalina.connector.CoyoteAdapter class:
```java
public void service(org.apache.coyote.Request req,
                    org.apache.coyote.Response res)
        throws Exception {

    Request request = (Request) req.getNote(ADAPTER_NOTES);
    Response response = (Response) res.getNote(ADAPTER_NOTES);

    if (request == null) {
        // Create objects
        request = connector.createRequest();
        request.setCoyoteRequest(req);
        response = connector.createResponse();
        response.setCoyoteResponse(res);

        // Link objects
        request.setResponse(response);
        response.setRequest(request);

        // Set as notes
        req.setNote(ADAPTER_NOTES, request);
        res.setNote(ADAPTER_NOTES, response);

        // Set query string encoding
        req.getParameters().setQueryStringEncoding
                (connector.getURIEncoding());
    }

    if (connector.getXpoweredBy()) {
        response.addHeader("X-Powered-By", POWERED_BY);
    }

    boolean comet = false;
    boolean async = false;

    try {
        // Parse and set Catalina and configuration specific
        // request parameters
        req.getRequestProcessor().setWorkerThreadName(Thread.currentThread().getName());
        boolean postParseSuccess = postParseRequest(req, request, res, response);
        if (postParseSuccess) {
            // check valves if we support async
            request.setAsyncSupported(connector.getService().getContainer().getPipeline().isAsyncSupported());
            // Calling the container
            connector.getService().getContainer().getPipeline().getFirst().invoke(request, response);

            if (request.isComet()) {
                if (!response.isClosed() && !response.isError()) {
                    if (request.getAvailable() || (request.getContentLength() > 0 && (!request.isParametersParsed()))) {
                        // Invoke a read event right away if there are available bytes
                        if (event(req, res, SocketStatus.OPEN)) {
                            comet = true;
                            res.action(ActionCode.COMET_BEGIN, null);
                        }
                    } else {
                        comet = true;
                        res.action(ActionCode.COMET_BEGIN, null);
                    }
                } else {
                    // Clear the filter chain, as otherwise it will not be reset elsewhere
                    // since this is a Comet request
                    request.setFilterChain(null);
                }
            }
        }

        AsyncContextImpl asyncConImpl = (AsyncContextImpl) request.getAsyncContext();
        if (asyncConImpl != null) {
            async = true;
        } else if (!comet) {
            request.finishRequest();
            response.finishResponse();
            if (postParseSuccess &&
                    request.getMappingData().context != null) {
                // Log only if processing was invoked.
                // If postParseRequest() failed, it has already logged it.
                // If context is null this was the start of a comet request
                // that failed and has already been logged.
                ((Context) request.getMappingData().context).logAccess(
                        request, response,
                        System.currentTimeMillis() - req.getStartTime(),
                        false);
            }
            req.action(ActionCode.POST_REQUEST, null);
        }

    } catch (IOException e) {
        // Ignore
    } finally {
        req.getRequestProcessor().setWorkerThreadName(null);
        // Recycle the wrapper request and response
        if (!comet && !async) {
            request.recycle();
            response.recycle();
        } else {
            // Clear converters so that the minimum amount of memory
            // is used by this processor
            request.clearEncoders();
            response.clearEncoders();
        }
    }
}
```
Here the incoming org.apache.coyote.Request is wrapped into an org.apache.catalina.connector.Request; the latter is the object that actually travels through the Tomcat container. Two calls deserve close attention: postParseRequest(req, request, res, response) and connector.getService().getContainer().getPipeline().getFirst().invoke(request, response).
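Before moving on, note the getNote/setNote logic at the top of service(): the heavyweight Catalina wrappers are created once per connection, parked on the low-level coyote objects under the ADAPTER_NOTES slot, and reused (after recycling) for every later request on that connection. A minimal sketch of this caching idea, using simplified stand-in classes rather than the real Tomcat types:

```java
import java.util.HashMap;
import java.util.Map;

// Simplified stand-in for org.apache.coyote.Request's "notes" storage.
class CoyoteRequest {
    private final Map<Integer, Object> notes = new HashMap<>();
    Object getNote(int key) { return notes.get(key); }
    void setNote(int key, Object value) { notes.put(key, value); }
}

// Simplified stand-in for the heavyweight catalina connector Request,
// which is recycled between requests rather than reallocated.
class CatalinaRequest { }

public class NotesSketch {
    static final int ADAPTER_NOTES = 1;

    static CatalinaRequest wrap(CoyoteRequest req) {
        CatalinaRequest request = (CatalinaRequest) req.getNote(ADAPTER_NOTES);
        if (request == null) {              // first request on this connection
            request = new CatalinaRequest();
            req.setNote(ADAPTER_NOTES, request);
        }
        return request;
    }

    public static void main(String[] args) {
        CoyoteRequest req = new CoyoteRequest();
        // Two calls return the same wrapper instance: allocation happens once.
        System.out.println(NotesSketch.wrap(req) == NotesSketch.wrap(req)); // true
    }
}
```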
The first of these, the postParseRequest method, looks like this:
```java
/**
 * Parse additional request parameters.
 */
protected boolean postParseRequest(org.apache.coyote.Request req,
                                   Request request,
                                   org.apache.coyote.Response res,
                                   Response response)
        throws Exception {

    // XXX the processor may have set a correct scheme and port prior to this point,
    // in ajp13 protocols dont make sense to get the port from the connector...
    // otherwise, use connector configuration
    if (!req.scheme().isNull()) {
        // use processor specified scheme to determine secure state
        request.setSecure(req.scheme().equals("https"));
    } else {
        // use connector scheme and secure configuration, (defaults to
        // "http" and false respectively)
        req.scheme().setString(connector.getScheme());
        request.setSecure(connector.getSecure());
    }

    // FIXME: the code below doesnt belongs to here,
    // this is only have sense
    // in Http11, not in ajp13..
    // At this point the Host header has been processed.
    // Override if the proxyPort/proxyHost are set
    String proxyName = connector.getProxyName();
    int proxyPort = connector.getProxyPort();
    if (proxyPort != 0) {
        req.setServerPort(proxyPort);
    }
    if (proxyName != null) {
        req.serverName().setString(proxyName);
    }

    // Copy the raw URI to the decodedURI
    MessageBytes decodedURI = req.decodedURI();
    decodedURI.duplicate(req.requestURI());

    // Parse the path parameters. This will:
    //   - strip out the path parameters
    //   - convert the decodedURI to bytes
    parsePathParameters(req, request);

    // URI decoding
    // %xx decoding of the URL
    try {
        req.getURLDecoder().convert(decodedURI, false);
    } catch (IOException ioe) {
        res.setStatus(400);
        res.setMessage("Invalid URI: " + ioe.getMessage());
        connector.getService().getContainer().logAccess(
                request, response, 0, true);
        return false;
    }
    // Normalization
    if (!normalize(req.decodedURI())) {
        res.setStatus(400);
        res.setMessage("Invalid URI");
        connector.getService().getContainer().logAccess(
                request, response, 0, true);
        return false;
    }
    // Character decoding
    convertURI(decodedURI, request);
    // Check that the URI is still normalized
    if (!checkNormalize(req.decodedURI())) {
        res.setStatus(400);
        res.setMessage("Invalid URI character encoding");
        connector.getService().getContainer().logAccess(
                request, response, 0, true);
        return false;
    }

    // Set the remote principal
    String principal = req.getRemoteUser().toString();
    if (principal != null) {
        request.setUserPrincipal(new CoyotePrincipal(principal));
    }

    // Set the authorization type
    String authtype = req.getAuthType().toString();
    if (authtype != null) {
        request.setAuthType(authtype);
    }

    // Request mapping.
    MessageBytes serverName;
    if (connector.getUseIPVHosts()) {
        serverName = req.localName();
        if (serverName.isNull()) {
            // well, they did ask for it
            res.action(ActionCode.REQ_LOCAL_NAME_ATTRIBUTE, null);
        }
    } else {
        serverName = req.serverName();
    }
    if (request.isAsyncStarted()) {
        // TODO SERVLET3 - async
        // reset mapping data, should prolly be done elsewhere
        request.getMappingData().recycle();
    }

    boolean mapRequired = true;
    String version = null;

    while (mapRequired) {
        if (version != null) {
            // Once we have a version - that is it
            mapRequired = false;
        }
        // This will map the the latest version by default
        connector.getMapper().map(serverName, decodedURI, version,
                request.getMappingData());
        request.setContext((Context) request.getMappingData().context);
        request.setWrapper((Wrapper) request.getMappingData().wrapper);

        // Single contextVersion therefore no possibility of remap
        if (request.getMappingData().contexts == null) {
            mapRequired = false;
        }

        // If there is no context at this point, it is likely no ROOT context
        // has been deployed
        if (request.getContext() == null) {
            res.setStatus(404);
            res.setMessage("Not found");
            // No context, so use host
            Host host = request.getHost();
            // Make sure there is a host (might not be during shutdown)
            if (host != null) {
                host.logAccess(request, response, 0, true);
            }
            return false;
        }

        // Now we have the context, we can parse the session ID from the URL
        // (if any). Need to do this before we redirect in case we need to
        // include the session id in the redirect
        String sessionID = null;
        if (request.getServletContext().getEffectiveSessionTrackingModes()
                .contains(SessionTrackingMode.URL)) {

            // Get the session ID if there was one
            sessionID = request.getPathParameter(
                    SessionConfig.getSessionUriParamName(
                            request.getContext()));
            if (sessionID != null) {
                request.setRequestedSessionId(sessionID);
                request.setRequestedSessionURL(true);
            }
        }

        // Look for session ID in cookies and SSL session
        parseSessionCookiesId(req, request);
        parseSessionSslId(request);

        sessionID = request.getRequestedSessionId();

        if (mapRequired) {
            if (sessionID == null) {
                // No session means no possibility of needing to remap
                mapRequired = false;
            } else {
                // Find the context associated with the session
                Object[] objs = request.getMappingData().contexts;
                for (int i = (objs.length); i > 0; i--) {
                    Context ctxt = (Context) objs[i - 1];
                    if (ctxt.getManager().findSession(sessionID) != null) {
                        // Was the correct context already mapped?
                        if (ctxt.equals(request.getMappingData().context)) {
                            mapRequired = false;
                        } else {
                            // Set version so second time through mapping the
                            // correct context is found
                            version = ctxt.getWebappVersion();
                            // Reset mapping
                            request.getMappingData().recycle();
                            break;
                        }
                    }
                }
                if (version == null) {
                    // No matching context found. No need to re-map
                    mapRequired = false;
                }
            }
        }
        if (!mapRequired && request.getContext().getPaused()) {
            // Found a matching context but it is paused. Mapping data will
            // be wrong since some Wrappers may not be registered at this
            // point.
            try {
                Thread.sleep(1000);
            } catch (InterruptedException e) {
                // Should never happen
            }
            // Reset mapping
            request.getMappingData().recycle();
            mapRequired = true;
        }
    }

    // Possible redirect
    MessageBytes redirectPathMB = request.getMappingData().redirectPath;
    if (!redirectPathMB.isNull()) {
        String redirectPath = urlEncoder.encode(redirectPathMB.toString());
        String query = request.getQueryString();
        if (request.isRequestedSessionIdFromURL()) {
            // This is not optimal, but as this is not very common, it
            // shouldn't matter
            redirectPath = redirectPath + ";" +
                    SessionConfig.getSessionUriParamName(
                            request.getContext()) +
                    "=" + request.getRequestedSessionId();
        }
        if (query != null) {
            // This is not optimal, but as this is not very common, it
            // shouldn't matter
            redirectPath = redirectPath + "?" + query;
        }
        response.sendRedirect(redirectPath);
        request.getContext().logAccess(request, response, 0, true);
        return false;
    }

    // Filter trace method
    if (!connector.getAllowTrace()
            && req.method().equalsIgnoreCase("TRACE")) {
        Wrapper wrapper = request.getWrapper();
        String header = null;
        if (wrapper != null) {
            String[] methods = wrapper.getServletMethods();
            if (methods != null) {
                for (int i = 0; i < methods.length; i++) {
                    if ("TRACE".equals(methods[i])) {
                        continue;
                    }
                    if (header == null) {
                        header = methods[i];
                    } else {
                        header += ", " + methods[i];
                    }
                }
            }
        }
        res.setStatus(405);
        res.addHeader("Allow", header);
        res.setMessage("TRACE method is not allowed");
        request.getContext().logAccess(request, response, 0, true);
        return false;
    }

    return true;
}
```
The main job of this method is to populate the org.apache.catalina.connector.Request object. The key part is the mapping step inside the while loop:
```java
// This will map the the latest version by default
connector.getMapper().map(serverName, decodedURI, version,
        request.getMappingData());
request.setContext((Context) request.getMappingData().context);
request.setWrapper((Wrapper) request.getMappingData().wrapper);
```
Look at the map method; note that its last argument is request.getMappingData():
```java
public void map(MessageBytes host, MessageBytes uri, String version,
                MappingData mappingData)
        throws Exception {

    if (host.isNull()) {
        host.getCharChunk().append(defaultHostName);
    }
    host.toChars();
    uri.toChars();
    internalMap(host.getCharChunk(), uri.getCharChunk(), version,
            mappingData);
}
```
This ends up calling the internalMap method of the org.apache.tomcat.util.http.mapper.Mapper class, and the last argument of that call is exactly the request.getMappingData() mentioned above. Here is what internalMap does:
```java
/**
 * Map the specified URI.
 */
private final void internalMap(CharChunk host, CharChunk uri,
        String version, MappingData mappingData) throws Exception {

    uri.setLimit(-1);

    Context[] contexts = null;
    Context context = null;
    ContextVersion contextVersion = null;

    int nesting = 0;

    // Virtual host mapping
    if (mappingData.host == null) {
        Host[] hosts = this.hosts;
        int pos = findIgnoreCase(hosts, host);
        if ((pos != -1) && (host.equalsIgnoreCase(hosts[pos].name))) {
            mappingData.host = hosts[pos].object;
            contexts = hosts[pos].contextList.contexts;
            nesting = hosts[pos].contextList.nesting;
        } else {
            if (defaultHostName == null) {
                return;
            }
            pos = find(hosts, defaultHostName);
            if ((pos != -1) && (defaultHostName.equals(hosts[pos].name))) {
                mappingData.host = hosts[pos].object;
                contexts = hosts[pos].contextList.contexts;
                nesting = hosts[pos].contextList.nesting;
            } else {
                return;
            }
        }
    }

    // Context mapping
    if (mappingData.context == null) {
        int pos = find(contexts, uri);
        if (pos == -1) {
            return;
        }

        int lastSlash = -1;
        int uriEnd = uri.getEnd();
        int length = -1;
        boolean found = false;
        while (pos >= 0) {
            if (uri.startsWith(contexts[pos].name)) {
                length = contexts[pos].name.length();
                if (uri.getLength() == length) {
                    found = true;
                    break;
                } else if (uri.startsWithIgnoreCase("/", length)) {
                    found = true;
                    break;
                }
            }
            if (lastSlash == -1) {
                lastSlash = nthSlash(uri, nesting + 1);
            } else {
                lastSlash = lastSlash(uri);
            }
            uri.setEnd(lastSlash);
            pos = find(contexts, uri);
        }
        uri.setEnd(uriEnd);

        if (!found) {
            if (contexts[0].name.equals("")) {
                context = contexts[0];
            }
        } else {
            context = contexts[pos];
        }
        if (context != null) {
            mappingData.contextPath.setString(context.name);
        }
    }

    if (context != null) {
        ContextVersion[] contextVersions = context.versions;
        int versionCount = contextVersions.length;
        if (versionCount > 1) {
            Object[] contextObjects = new Object[contextVersions.length];
            for (int i = 0; i < contextObjects.length; i++) {
                contextObjects[i] = contextVersions[i].object;
            }
            mappingData.contexts = contextObjects;
        }

        if (version == null) {
            // Return the latest version
            contextVersion = contextVersions[versionCount - 1];
        } else {
            int pos = find(contextVersions, version);
            if (pos < 0 || !contextVersions[pos].name.equals(version)) {
                // Return the latest version
                contextVersion = contextVersions[versionCount - 1];
            } else {
                contextVersion = contextVersions[pos];
            }
        }
        mappingData.context = contextVersion.object;
    }

    // Wrapper mapping
    if ((contextVersion != null) && (mappingData.wrapper == null)) {
        internalMapWrapper(contextVersion, uri, mappingData);
    }
}
```
Put simply, internalMap fills in several fields of its mappingData argument, such as mappingData.host, mappingData.contextPath, mappingData.contexts, and mappingData.wrapper. And as noted above, this mappingData is precisely the mappingData field held inside the org.apache.catalina.connector.Request object. Now return to the key lines of CoyoteAdapter's postParseRequest flagged earlier:
```java
connector.getMapper().map(serverName, decodedURI, version,
        request.getMappingData());
request.setContext((Context) request.getMappingData().context);
request.setWrapper((Wrapper) request.getMappingData().wrapper);
```
The reason for quoting the implementations at such length is to make the meaning of these lines concrete: the map call populates the host, context, and wrapper information in the request's mappingData field, and the following two lines copy the context and wrapper out of mappingData directly into the request object's own context and wrapper fields. The sequence diagram below illustrates this key call chain:
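The context-matching step inside internalMap is, at heart, a longest-prefix match on '/' boundaries against the deployed context paths, falling back to the ROOT context (whose path is the empty string). Here is an illustrative sketch of that idea; it is a deliberate simplification using plain strings, not Tomcat's actual sorted-array data structures:

```java
import java.util.List;

public class ContextMatchSketch {

    // Returns the registered context path that best matches the URI:
    // the longest path that is a whole-segment prefix of the URI,
    // or "" (the ROOT context) if only the ROOT matches, or null if
    // no context matches at all.
    static String mapContext(List<String> contextPaths, String uri) {
        String best = null;
        for (String path : contextPaths) {
            boolean matches = path.isEmpty()          // ROOT matches everything
                    || uri.equals(path)               // exact match
                    || uri.startsWith(path + "/");    // match at a '/' boundary
            if (matches && (best == null || path.length() > best.length())) {
                best = path;
            }
        }
        return best;
    }

    public static void main(String[] args) {
        List<String> contexts = List.of("", "/shop", "/shop/admin");
        System.out.println(mapContext(contexts, "/shop/admin/list")); // /shop/admin
        System.out.println(mapContext(contexts, "/shop/cart"));       // /shop
        System.out.println(mapContext(contexts, "/other"));           // "" (ROOT)
    }
}
```

Note the '/'-boundary check: a request for /shopping must not match the /shop context, which is why internalMap trims the URI back to slash positions rather than doing a raw string prefix comparison.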
This article will not walk through the detailed matching of host, context, and wrapper; readers can study the org.apache.tomcat.util.http.mapper.Mapper source themselves. In outline, the principle is this: Mapper has inner classes Host, Context, and Wrapper, and holds member fields of those types. When the Tomcat container starts, the startInternal method of the org.apache.catalina.connector.Connector class is called (see the earlier article for the startup process); its last line is:
```java
mapperListener.start();
```
This in turn calls the startInternal method of the org.apache.catalina.connector.MapperListener class:
```java
public void startInternal() throws LifecycleException {

    setState(LifecycleState.STARTING);

    // Find any components that have already been initialized since the
    // MBean listener won't be notified as those components will have
    // already registered their MBeans
    findDefaultHost();

    Engine engine = (Engine) connector.getService().getContainer();
    addListeners(engine);

    Container[] conHosts = engine.findChildren();
    for (Container conHost : conHosts) {
        Host host = (Host) conHost;
        if (!LifecycleState.NEW.equals(host.getState())) {
            // Registering the host will register the context and wrappers
            registerHost(host);
        }
    }
}
```
The loop at the end of this method calls the registerHost method of the same class for each started Host:
```java
private void registerHost(Host host) {
    String[] aliases = host.findAliases();

    mapper.addHost(host.getName(), aliases, host);

    for (Container container : host.findChildren()) {
        if (container.getState().isAvailable()) {
            registerContext((Context) container);
        }
    }
    if (log.isDebugEnabled()) {
        log.debug(sm.getString("mapperListener.registerHost",
                host.getName(), domain, connector));
    }
}
```
Inside registerHost, registerContext is called for each available child Context, and registerContext in turn calls registerWrapper. Along the way, registerHost calls the mapper's addHost method (as seen above), registerContext calls mapper.addContextVersion, and registerWrapper calls mapper.addWrapper.
So during Tomcat startup, every Host, Context, and Wrapper component in use is registered into the Mapper associated with each Connector; this is what enables the container, when it receives a request, to match the request URL and related information to a concrete host, context, and wrapper.
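The two phases, registration at startup and lookup at request time, can be sketched with a toy structure. This is a hypothetical nested-map model for illustration only; Tomcat's Mapper uses sorted arrays and far richer matching rules:

```java
import java.util.HashMap;
import java.util.Map;

public class MapperRegistrySketch {

    // host -> contextPath -> servletPath -> wrapper name
    static final Map<String, Map<String, Map<String, String>>> registry = new HashMap<>();

    // Startup phase: mirrors registerHost -> registerContext -> registerWrapper
    static void addWrapper(String host, String ctx, String path, String wrapper) {
        registry.computeIfAbsent(host, h -> new HashMap<>())
                .computeIfAbsent(ctx, c -> new HashMap<>())
                .put(path, wrapper);
    }

    // Request phase: one lookup resolves the request to a single wrapper,
    // instead of consulting every deployed servlet.
    static String map(String host, String ctx, String path) {
        return registry.getOrDefault(host, Map.of())
                       .getOrDefault(ctx, Map.of())
                       .get(path);
    }

    public static void main(String[] args) {
        addWrapper("localhost", "/app", "/hello", "HelloServlet");
        System.out.println(map("localhost", "/app", "/hello")); // HelloServlet
    }
}
```

The design point is that the Mapper is a precomputed index over the container tree: building it once at startup (and keeping it in sync via lifecycle listeners) makes the per-request Host/Context/Wrapper resolution cheap.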
The wrapper discussed in this article is Tomcat's internal encapsulation of a Servlet; the relationship can be treated as one-to-one. Here is the component structure of the Tomcat container:
A Service contains exactly one Engine but possibly several Connectors; within the Engine, the Engine-to-Host, Host-to-Context, and Context-to-Wrapper relationships are all one-to-many. A single browser request neither needs to, nor possibly could, run every Servlet of every web application deployed in Tomcat. The mapping mechanism described in this article is what allows a Connector, after turning an accepted socket connection into a request, to find exactly which Host, which Context, and which Wrapper under the Engine should handle that request. The next article will show how the container then moves the request step by step through the components involved.