(Population Projection) The U.S. Census Bureau projects the population based on the following assumptions:
One birth every 7 seconds
One death every 13 seconds
One new immigrant every 45 seconds
Write a program to display the population for each of the next five years. Assume the current population is 312,032,486 and that a year has 365 days. Hint: in Java, when two integers are divided, the result is an integer and the fractional part is truncated. For example, 5/4 is 1 (not 1.25) and 10/4 is 2 (not 2.5). To get an accurate result with a fractional part, one of the two operands of the division must be a value with a decimal point. For example, 5.0/4 is 1.25 and 10/4.0 is 2.5.
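
The hint is easy to verify directly; a minimal sketch (the class name DivisionDemo is mine, not part of the exercise):

```java
public class DivisionDemo {
	public static void main(String[] args) {
		System.out.println(5 / 4);    // 1    -- both operands are int, fraction truncated
		System.out.println(10 / 4);   // 2
		System.out.println(5.0 / 4);  // 1.25 -- one double operand forces floating-point division
		System.out.println(10 / 4.0); // 2.5
	}
}
```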

package myjava;

public class cy {
	public static void main(String[] args) {
		long population = 312032486L;              // current population
		int secondsPerYear = 365 * 24 * 60 * 60;   // seconds in a 365-day year
		double births = secondsPerYear / 7.0;      // one birth every 7 seconds
		double deaths = secondsPerYear / 13.0;     // one death every 13 seconds
		double immigrants = secondsPerYear / 45.0; // one new immigrant every 45 seconds
		for (int year = 1; year <= 5; year++) {
			// Net yearly change: births and immigrants increase the
			// population, deaths decrease it. Truncate the fractional
			// part, as the hint describes.
			population += (long) (births - deaths + immigrants);
			System.out.println("Population in year " + year + ": " + population);
		}
	}
}

Sample output:
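With the program above, truncating each year's fractional remainder before adding it to the `long` total, the five printed totals work out to:

```
Population in year 1: 314812582
Population in year 2: 317592678
Population in year 3: 320372774
Population in year 4: 323152870
Population in year 5: 325932966
```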

For population statistics at a larger scale, the same kind of computation can be done with Hadoop's MapReduce framework. Below are the steps for two tasks over a data file whose lines have the form `year,state,population` (the snippets assume the usual `java.io.IOException` and `org.apache.hadoop.*` imports).

1. Total population of each state for each year

First, write a Mapper that groups each line of the data file by year and state, emitting key-value pairs of the form `<year_state, population>`:

```java
public class PopulationMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
	private Text outputKey = new Text();
	private LongWritable outputValue = new LongWritable();

	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		// Each input line is expected to be: year,state,population
		String[] fields = value.toString().split(",");
		String year = fields[0];
		String state = fields[1];
		long population = Long.parseLong(fields[2]);
		outputKey.set(year + "_" + state);
		outputValue.set(population);
		context.write(outputKey, outputValue);
	}
}
```

Next, write a Reducer that regroups the Mapper's key-value pairs by year and state and sums the population counts to produce each state's total for each year:

```java
public class PopulationReducer extends Reducer<Text, LongWritable, Text, LongWritable> {
	private LongWritable outputValue = new LongWritable();

	@Override
	protected void reduce(Text key, Iterable<LongWritable> values, Context context)
			throws IOException, InterruptedException {
		long sum = 0;
		for (LongWritable value : values) {
			sum += value.get();
		}
		outputValue.set(sum);
		context.write(key, outputValue);
	}
}
```

Finally, configure the input/output paths and the Mapper and Reducer classes in a Driver program, and submit the job to the Hadoop cluster:

```java
public class PopulationJob {
	public static void main(String[] args)
			throws IOException, ClassNotFoundException, InterruptedException {
		Configuration conf = new Configuration();
		Job job = Job.getInstance(conf, "PopulationJob");
		job.setJarByClass(PopulationJob.class);
		job.setMapperClass(PopulationMapper.class);
		job.setReducerClass(PopulationReducer.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(LongWritable.class);
		FileInputFormat.addInputPath(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		System.exit(job.waitForCompletion(true) ? 0 : 1);
	}
}
```
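Since summation is associative and commutative and the reducer's input and output types match, PopulationReducer could also serve as a map-side combiner to cut shuffle traffic; a small, optional addition to the Driver above:

```java
// Optional: pre-aggregate partial sums on the map side before the shuffle.
// Valid here because summing values is associative and commutative.
job.setCombinerClass(PopulationReducer.class);
```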
2. Population density of every state in 2008, in descending order

Similarly, write a Mapper that keeps only the 2008 records, grouped by state, emitting key-value pairs of the form `<state, population>`:

```java
public class DensityMapper extends Mapper<LongWritable, Text, Text, LongWritable> {
	private Text outputKey = new Text();
	private LongWritable outputValue = new LongWritable();

	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		String[] fields = value.toString().split(",");
		String year = fields[0];
		String state = fields[1];
		long population = Long.parseLong(fields[2]);
		if ("2008".equals(year)) {
			outputKey.set(state);
			outputValue.set(population);
			context.write(outputKey, outputValue);
		}
	}
}
```

Then write a Reducer that groups by state, sums the population counts, and converts each total to a density:

```java
public class DensityReducer extends Reducer<Text, LongWritable, Text, DoubleWritable> {
	private DoubleWritable outputValue = new DoubleWritable();

	@Override
	protected void reduce(Text key, Iterable<LongWritable> values, Context context)
			throws IOException, InterruptedException {
		long sum = 0;
		for (LongWritable value : values) {
			sum += value.get();
		}
		// Density = total population / land area in square miles. The fixed
		// 1,000,000 is a placeholder; a real job would look up each state's
		// actual land area rather than use one constant for all states.
		double density = (double) sum / 1000000;
		outputValue.set(density);
		context.write(key, outputValue);
	}
}
```

Finally, configure the first job in a Driver program, then chain a second job that re-sorts the `<state, density>` output by value in descending order. Note that because each job's map output types differ from its reduce output types, they must be set explicitly:

```java
public class DensityJob {
	public static void main(String[] args)
			throws IOException, ClassNotFoundException, InterruptedException {
		Configuration conf = new Configuration();
		Job job = Job.getInstance(conf, "DensityJob");
		job.setJarByClass(DensityJob.class);
		job.setMapperClass(DensityMapper.class);
		job.setReducerClass(DensityReducer.class);
		// Map output is <Text, LongWritable>, reduce output is <Text, DoubleWritable>
		job.setMapOutputValueClass(LongWritable.class);
		job.setOutputKeyClass(Text.class);
		job.setOutputValueClass(DoubleWritable.class);
		FileInputFormat.addInputPath(job, new Path(args[0]));
		FileOutputFormat.setOutputPath(job, new Path(args[1]));
		job.waitForCompletion(true);

		Job sortJob = Job.getInstance(conf, "DensitySortJob");
		sortJob.setJarByClass(DensityJob.class);
		sortJob.setMapperClass(InverseMapper.class);
		sortJob.setReducerClass(DescendingSortReducer.class);
		// Map output is <DoubleWritable, Text>, reduce output is <Text, DoubleWritable>
		sortJob.setMapOutputKeyClass(DoubleWritable.class);
		sortJob.setMapOutputValueClass(Text.class);
		sortJob.setOutputKeyClass(Text.class);
		sortJob.setOutputValueClass(DoubleWritable.class);
		sortJob.setNumReduceTasks(1); // single reducer => one globally sorted output file
		FileInputFormat.addInputPath(sortJob, new Path(args[1]));
		FileOutputFormat.setOutputPath(sortJob, new Path(args[2]));
		sortJob.waitForCompletion(true);
	}
}
```

Note that the descending order is achieved with a custom InverseMapper class and a DescendingSortReducer class:

```java
public class InverseMapper extends Mapper<LongWritable, Text, DoubleWritable, Text> {
	private DoubleWritable outputKey = new DoubleWritable();
	private Text outputValue = new Text();

	@Override
	protected void map(LongWritable key, Text value, Context context)
			throws IOException, InterruptedException {
		// The first job's TextOutputFormat writes "state<TAB>density" lines
		String[] fields = value.toString().split("\t");
		String state = fields[0];
		double density = Double.parseDouble(fields[1]);
		outputKey.set(-density); // negate so the ascending shuffle sort yields descending order
		outputValue.set(state);
		context.write(outputKey, outputValue);
	}
}

public class DescendingSortReducer extends Reducer<DoubleWritable, Text, Text, DoubleWritable> {
	private Text outputKey = new Text();

	@Override
	protected void reduce(DoubleWritable key, Iterable<Text> values, Context context)
			throws IOException, InterruptedException {
		double density = -key.get(); // negate again to restore the original value
		for (Text value : values) {
			outputKey.set(value);
			context.write(outputKey, new DoubleWritable(density));
		}
	}
}
```

This completes both tasks.
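As an alternative to negating the key twice, the sort job could keep the real density values and install a descending sort comparator instead; a sketch under that assumption (the class name DescendingDoubleComparator is mine):

```java
import org.apache.hadoop.io.DoubleWritable;
import org.apache.hadoop.io.WritableComparable;
import org.apache.hadoop.io.WritableComparator;

// Reverses DoubleWritable's natural ascending order, so the shuffle sort
// delivers keys to the reducer from highest density to lowest.
public class DescendingDoubleComparator extends WritableComparator {
	protected DescendingDoubleComparator() {
		super(DoubleWritable.class, true);
	}

	@Override
	@SuppressWarnings("rawtypes")
	public int compare(WritableComparable a, WritableComparable b) {
		return -super.compare(a, b); // invert the natural ordering
	}
}
```

It would be registered with `sortJob.setSortComparatorClass(DescendingDoubleComparator.class);`, after which InverseMapper and DescendingSortReducer would no longer need to flip signs.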