### Reading Kafka data from a remote environment with a local Flink streaming job and writing it to the remote environment's HDFS
public static void main(String[] args) throws Exception {
// set up the streaming execution environment
final StreamExecutionEnvironment env = StreamExecutionEnvironment.getExecutionEnvironment();
env.enableCheckpointing(5000);
env.setStreamTimeCharacteristic(TimeCharacteristic.EventTime);
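Setting `TimeCharacteristic.EventTime` means the job orders records by timestamps carried in the data itself, and progress is tracked by watermarks that lag behind the highest timestamp seen so far. The core idea of a bounded-out-of-orderness watermark (the strategy typically paired with Kafka sources) can be sketched in plain Java; this is an illustrative sketch, not Flink's actual API, and the 3-second bound is an assumed value:

```java
// Illustrative sketch (not Flink API): how a bounded-out-of-orderness
// watermark is derived from the event timestamps observed so far.
public class WatermarkSketch {
    static final long MAX_OUT_OF_ORDERNESS = 3000L; // tolerate 3 s of lateness (assumed value)

    long maxSeenTimestamp = Long.MIN_VALUE;

    // Called once per record; tracks the highest event timestamp seen so far.
    void onEvent(long eventTimestamp) {
        maxSeenTimestamp = Math.max(maxSeenTimestamp, eventTimestamp);
    }

    // The watermark lags the max timestamp by the allowed out-of-orderness:
    // any record with a timestamp <= the watermark is considered late.
    long currentWatermark() {
        return maxSeenTimestamp - MAX_OUT_OF_ORDERNESS;
    }

    public static void main(String[] args) {
        WatermarkSketch w = new WatermarkSketch();
        w.onEvent(10_000L);
        w.onEvent(8_500L); // out-of-order record; the max stays at 10 000
        System.out.println(w.currentWatermark()); // prints 7000
    }
}
```

In real Flink code this logic lives in a watermark/timestamp assigner attached to the Kafka source; without one, event-time windows never fire because no watermarks advance.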
Properties properties = new Properties();
// IP address and port of the Kafka brokers in the target environment
properties.setProperty("bootstrap.servers", "192.168.0.1:9092"); // Kafka
// Only required for Kafka 0.8:
// properties.setProperty("zookeeper.connect", "192.168.0.1:2181"); // ZooKeeper
properties.setProperty("group.id", "test-consumer-group"); // consumer group id
// First approach:
// Important: this is where the paths to hdfs-site.xml and core-site.xml are supplied. You can copy these two Hadoop config files from the target environment to your local machine; here they are placed in the project's resources directory.