Question
When opening a large text file, is cat or more the better choice?
How do you generate a text file with 100 million lines?
Analysis:
What method generates the file linux.txt --> redirection
100 million lines --> a for loop
How long will it take?
Generate a large file
Write the script
root@nginx:~/lianxi# vim bigfile.sh
root@nginx:~/lianxi# cat bigfile.sh
#!/bin/bash
# generate 100 million lines of content
for i in {1..100000000}
do
echo "$i welcome to changsha,I am study cat and more" >>linux.txt
done
root@nginx:~/lianxi# time bash bigfile.sh
Killed
real 1m32.047s
user 0m16.528s
sys 0m42.085s
root@nginx:~/lianxi#
Why was our program killed while it was running?
It used too much memory.
OOM: out of memory.
When a program needs more memory than the machine has, the kernel kills it, terminating the run.
The brace expansion {1..100000000} builds the entire list of 100 million numbers in memory before the loop even starts, so running the script consumes a huge amount of memory and the process is killed by the system. The fix is to shrink the range the loop generates, then use cat with redirection to build up the large file.
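If you do want all 100 million lines in one pass, a C-style arithmetic loop avoids the up-front brace expansion entirely; a minimal sketch (the gen_lines helper name is hypothetical, not from the session), which also redirects once after done instead of reopening the file on every iteration:

```shell
#!/bin/bash
# gen_lines is a hypothetical helper: a C-style loop does not pre-expand
# the whole sequence in memory the way {1..100000000} does, and
# redirecting once after `done` avoids reopening the file per line.
gen_lines() {
    local n=$1 out=$2
    for ((i = 1; i <= n; i++))
    do
        echo "$i welcome to changsha,I am study cat and more"
    done > "$out"
}
# gen_lines 100000000 linux.txt    # the full run still takes minutes
```

The arithmetic loop generates one number per iteration, so memory use stays flat no matter how large the count is.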
Regenerate the large file
Step 1: write the script bigfile.sh
root@nginx:~/lianxi# cat bigfile.sh
#!/bin/bash
# generate 1 million lines of content
for i in {1..1000000}
do
echo "$i welcome to changsha,I am study cat and more" >>linux.txt
done
Step 2: use cat with redirection to build a large file
root@nginx:~/lianxi# cat linux.txt >>zhang.txt
root@nginx:~/lianxi# cat linux.txt >>zhang.txt
root@nginx:~/lianxi# cat linux.txt >>zhang.txt
root@nginx:~/lianxi# cat linux.txt >>zhang.txt
root@nginx:~/lianxi# cat linux.txt >>zhang.txt
[root@localhost lianxi2]# du -sh zhang.txt
3.5G zhang.txt
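The repeated appends above can be wrapped in a loop that keeps going until the destination reaches a target size; a sketch with a hypothetical grow_file helper (stat -c %s is the GNU coreutils way to print a file's size in bytes):

```shell
#!/bin/bash
# grow_file is a hypothetical helper that keeps appending a source file
# to a destination until the destination reaches a target size in bytes.
grow_file() {
    local src=$1 dst=$2 target=$3 size
    while true
    do
        size=$(stat -c %s "$dst" 2>/dev/null || echo 0)
        [ "$size" -ge "$target" ] && break
        cat "$src" >> "$dst"
    done
}
# grow_file linux.txt zhang.txt $((3 * 1024 * 1024 * 1024))   # ~3 GB
```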
Step 3: check the memory size
root@nginx:~/lianxi# free -m
total used free shared buff/cache available
Mem: 1982 409 460 2 1112 1381
Mem refers to physical memory
total  total memory
used   memory in use
free   memory not yet used
CPU and memory consumption are the main signs of whether a machine is busy (laggy).
top    shows system resource consumption in real time
Step 4: drop the cached data held in memory
root@nginx:~/lianxi# echo 3 >/proc/sys/vm/drop_caches
root@nginx:~/lianxi# free -m
total used free shared buff/cache available
Mem: 1982 417 1443 2 122 1424
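A commonly recommended refinement (an assumption, not shown in the session above) is to run sync first, so dirty pages are written back to disk and drop_caches can free as much page cache as possible; writing to the control file requires root:

```shell
#!/bin/bash
# Flush dirty pages to disk, then drop the caches.
# drop_caches values: 1 = page cache, 2 = dentries/inodes, 3 = both.
if [ -w /proc/sys/vm/drop_caches ]; then
    sync
    echo 3 > /proc/sys/vm/drop_caches
fi
```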
Step 5: test with cat and with more, respectively
root@nginx:~/lianxi# cat zhang.txt
root@nginx:~/lianxi# top
MiB Mem : 1982.5 total, 1405.9 free, 410.7 used, 166.0 buff/cache
root@nginx:~/lianxi# more zhang.txt
288 welcome to changsha,I am study cat and more
289 welcome to changsha,I am study cat and more
290 welcome to changsha,I am study cat and more
291 welcome to changsha,I am study cat and more
292 welcome to changsha,I am study cat and more
293 welcome to changsha,I am study cat and more
294 welcome to changsha,I am study cat and more
295 welcome to changsha,I am study cat and more
296 welcome to changsha,I am study cat and more
297 welcome to changsha,I am study cat and more
298 welcome to changsha,I am study cat and more
299 welcome to changsha,I am study cat and more
300 welcome to changsha,I am study cat and more
301 welcome to changsha,I am study cat and more
302 welcome to changsha,I am study cat and more
303 welcome to changsha,I am study cat and more
304 welcome to changsha,I am study cat and more
305 welcome to changsha,I am study cat and more
306 welcome to changsha,I am study cat and more
307 welcome to changsha,I am study cat and more
308 welcome to changsha,I am study cat and more
309 welcome to changsha,I am study cat and more
310 welcome to changsha,I am study cat and more
311 welcome to changsha,I am study cat and more
312 welcome to changsha,I am study cat and more
313 welcome to changsha,I am study cat and more
314 welcome to changsha,I am study cat and more
315 welcome to changsha,I am study cat and more
316 welcome to changsha,I am study cat and more
317 welcome to changsha,I am study cat and more
318 welcome to changsha,I am study cat and more
319 welcome to changsha,I am study cat and more
320 welcome to changsha,I am study cat and more
321 welcome to changsha,I am study cat and more
322 welcome to changsha,I am study cat and more
323 welcome to changsha,I am study cat and more
324 welcome to changsha,I am study cat and more
325 welcome to changsha,I am study cat and more
326 welcome to changsha,I am study cat and more
327 welcome to changsha,I am study cat and more
328 welcome to changsha,I am study cat and more
329 welcome to changsha,I am study cat and more
--More--(0%)
root@nginx:~/lianxi# top
MiB Mem : 1982.5 total, 1386.4 free, 411.4 used, 184.7 buff/cache
The file is 3.5 GB while we only have 2 GB of memory -- why doesn't cat exhaust all of it when viewing this file?
cat streams the file through a small read/write buffer, so the process itself never holds much memory. What grows is the kernel's page cache (the buff/cache column) as each page of the file is read; that cache fills gradually, and the kernel reclaims it when memory runs short. So viewing a large file with cat doesn't consume especially much CPU or memory at any moment, but memory use climbs steadily the longer it runs.
Summary
cat consumes comparatively more memory, because it reads the entire file through; more displays one screen at a time and consumes comparatively little.
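The conclusion above can be checked directly: watch the buff/cache column of free -m before and after streaming a file with cat. A sketch using a hypothetical cache_delta helper:

```shell
#!/bin/bash
# cache_delta is a hypothetical helper: it prints the buff/cache column
# of `free -m` before and after streaming a file through cat, showing
# that it is the page cache (not cat's own memory) that absorbs the file.
cache_delta() {
    local before after
    before=$(free -m | awk '/^Mem:/ {print $6}')
    cat "$1" > /dev/null
    after=$(free -m | awk '/^Mem:/ {print $6}')
    echo "buff/cache: ${before} MiB -> ${after} MiB"
}
# cache_delta zhang.txt
```

Running it against zhang.txt should show buff/cache rising toward the file size (capped by available memory), while top shows the cat process itself staying tiny.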