Using the Java Logging API

Debugging and logging are both essential parts of program development. These days, however, so many logging APIs are available, and since they are all quite good, picking one is difficult. Java forums abroad have debated these logging options at length.

Commons Logging is a thin bridge across these different logging APIs. The best-known logging facility in Java today is Log4j; another is the JDK 1.4 Logging API. Beyond those there is LogKit, used in Avalon, among others. commons-logging also ships two basic logging implementations of its own, NoOpLog and SimpleLog. A comparison of these libraries is outside the scope of this article; interested readers can consult the references.

Quick start

Logging is actually very simple to use. Put commons-logging.jar under /WEB-INF/lib, then write the following code:

LoggingTest.java 

package com.softleader.newspaper.java.opensource;

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class LoggingTest {

    // Obtain a Log instance for this class from the LogFactory
    private final Log log = LogFactory.getLog(LoggingTest.class);

    public void hello() {
        // Exercise each commons-logging level in turn
        log.error("ERROR");
        log.debug("DEBUG");
        log.warn("WARN");
        log.info("INFO");
        log.trace("TRACE");
        System.out.println("OKOK");
    }
}


Place a JSP under / to test it: test-commons-logging.jsp

<%@ page import="com.softleader.newspaper.java.opensource.LoggingTest" %>
<% LoggingTest test = new LoggingTest(); test.hello(); %>

You will then see the following output on the Tomcat console:

log4j:WARN No appenders could be found for logger (com.softleader.newspaper.java.opensource.LoggingTest).
log4j:WARN Please initialize the log4j system properly.
OKOK

This happens because you have not yet configured commons-logging.properties, which is covered next.

Setting up commons-logging.properties

You can specify which log implementation the factory should use. Taking Log4j as the example, write the following into /WEB-INF/classes/commons-logging.properties:

org.apache.commons.logging.Log=org.apache.commons.logging.impl.Log4JCategoryLog
If your server runs JDK 1.4 or later, you can use org.apache.commons.logging.impl.Jdk14Logger instead.
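For instance, the JDK 1.4 variant of the same /WEB-INF/classes/commons-logging.properties would then contain just this one line:

org.apache.commons.logging.Log=org.apache.commons.logging.impl.Jdk14Logger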
Next, write a properties file that suits the logger you chose. Sticking with Log4j, place a log4j.properties under /WEB-INF/classes/:

# Configuration 1: log to a file
log4j.rootLogger=DEBUG, A_default
log4j.appender.A_default=org.apache.log4j.RollingFileAppender
log4j.appender.A_default.File=c:/log/test.log
log4j.appender.A_default.MaxFileSize=4000KB
log4j.appender.A_default.MaxBackupIndex=10
log4j.appender.A_default.layout=org.apache.log4j.PatternLayout
log4j.appender.A_default.layout.ConversionPattern=%d{ISO8601} - %p - %m%n

# A more verbose pattern:
# log4j.appender.A_default.layout.ConversionPattern=%d %-5p [%t] %-17c{2} (%13F:%L) %3x - %m%n

# Configuration 2: log to the console
# (Only one rootLogger line takes effect per file, so pick one configuration or the other.)
log4j.rootLogger=INFO, A1
log4j.appender.A1=org.apache.log4j.ConsoleAppender
log4j.appender.A1.layout=org.apache.log4j.PatternLayout
log4j.appender.A1.layout.ConversionPattern=%-d{yyyy-MM-dd HH:mm:ss,SSS} [%c]-[%p] %m%n
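If you want file and console output at the same time, a single rootLogger line can list both appenders. A sketch, reusing the two appender definitions above:

# Send everything at DEBUG and above to both the file and the console
log4j.rootLogger=DEBUG, A_default, A1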


Now when you run test-commons-logging.jsp, the output is written to test.log under your c:/log directory.

PS: if none of the supported logging classes is available, commons-logging falls back to SimpleLog, and the file to configure in that case is simplelog.properties.
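As a minimal sketch of that fallback configuration (the property names come from the SimpleLog implementation; the levels and logger name shown here are just an example), /WEB-INF/classes/simplelog.properties could look like:

# Default level for all loggers
org.apache.commons.logging.simplelog.defaultlog=info
# Prefix each line with a timestamp
org.apache.commons.logging.simplelog.showdatetime=true
# Override the level for one logger hierarchy
org.apache.commons.logging.simplelog.log.com.softleader.newspaper=debug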

Conclusion: in my own experience, Log4j satisfies every engineer's needs, so I have simply used log4j directly rather than commons-logging. That said, to make a product more portable and avoid trouble when migrating, in new products and projects I switch the code to call the commons-logging API.
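As a minimal sketch of what that switch looks like in application code (the class and logger names below are just placeholders), all calls go through the commons-logging interfaces only:

import org.apache.commons.logging.Log;
import org.apache.commons.logging.LogFactory;

public class OrderService {

    // The factory picks the concrete implementation (Log4j, JDK 1.4
    // logging, SimpleLog...) at runtime, so this class never depends
    // on a specific logging library.
    private static final Log log = LogFactory.getLog(OrderService.class);

    public void process(String orderId) {
        // Guard expensive message construction behind a level check
        if (log.isDebugEnabled()) {
            log.debug("processing order " + orderId);
        }
        log.info("order processed");
    }
}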

If you are not familiar with how commons-logging works internally, please refer to the article <commons-logging的使用方法> (How to Use commons-logging).

