
计算机专业英语 (Professional English for Computer Science), Chapter 1

Chapter 1: Information Technology, The Internet, and You

Competencies

After you have read this chapter, you should be able to:

1. Explain the five parts of an information system: people, procedures, software, hardware, and data.
2. Distinguish between system software and application software.
3. Discuss the three kinds of system software programs.
4. Distinguish between basic and specialized application software.
5. Identify the four types of computers and the four types of microcomputers.
6. Describe the different types of computer hardware, including the system unit, input, output, storage, and communication devices.
7. Define data and describe document, worksheet, database, and presentation files.
8. Explain computer connectivity, the wireless revolution, the Internet, smartphones, and cloud computing.

Keywords (p. 22)

document file, end user, flash memory card, handheld computer, hard disk, hardware, high-definition (hi-def) disc, information, information system, information technology, input device, Internet, keyboard, laptop computer, mainframe computer, memory, microcomputer, microprocessor, midrange computer, minicomputer, modem, monitor, mouse, netbook, network, notebook computer, operating system, optical disk, output device, palm computer, people, personal digital assistant (PDA), presentation file, primary storage, printer, procedures, program, random-access memory (RAM), secondary storage, slate tablet PC, smartphone, software, solid-state drive, solid-state storage, specialized application software, supercomputer, system software, system unit, tablet PC, traditional tablet PC, USB drive, utility program, Web, wireless revolution, worksheet

Introduction

Microcomputers are common tools in all areas of life. Writers write, artists draw, engineers and scientists calculate—all on microcomputers. Students and businesspeople do all this, and more.

New forms of learning have developed. People who are homebound, who work odd hours, or who travel frequently may take online courses. A college course need not fit within a quarter or a semester.

New ways to communicate have developed as well.

```
import requests
import os
from bs4 import BeautifulSoup

class book_spider():
    def __init__(self,root_url):
        self.root_url=root_url
        self.book_list=[]      # second-level page URLs scraped from the first-level page are stored in this list
        self.chapter_list=[]   # third-level page URLs and chapter names scraped from second-level pages are stored in this list

    def get_url(url):
        while True:
            try:
                res=requests.get(url)
                if res.status_code==200:
                    res.encoding=res.apparent_encoding
                    print("页面获取成功")
                    return res.text
                else:
                    print("页面返回异常",res.status_code)
            except:
                print("页面获取错误")

    def get_book_list(self.url):
        res = self.get_url(url)
        html = BeautifulSoup(res,"html.parser")
        a_list = html.find_all("a",{"class":"name"})
        for a in a_list:
            self.book_list.append(a["href"])
        self.book_list = [self.root_url+i for i in self.bbok_list]
        self.book_list.remove('http://10.1.88.252:7000/庆余年')
        print(book_list)

    def get_chapter_list(self,url):
        res = self.get_url(url)
        html = BeautifulSoup(res,"html.parser")
        a_list = html.find_all("a",{"class":"chapter"})
        for a in a_list:
            self.chapter_list.append((a["href"],a.text.replace("\n","")))

    def get_content(self.chapter):
        url = self.root_url + chapter[0]
        print(url)
        book_name = chapter[0].split("/")[1]
        print(book_name)
        if not os.path.exists(book_name):
            os.mkdir(book_name)
        res = self.get_url(url)
        html = BeautifulSoup(res,"html.parser")
        content = html.find("div",{"id":"content"}).text
        print(content)
        path = os.path.join(book_name,chapter[1])
        with open(path,"w",encoding="utf8") as f:
            f.write(content)

    def main():
        self.get_book_list(self.root_url)
        for book in self.book_list:
            self.get_chapter_liat(book)
        for chapter in chapter_list:
            self.get_content(chapter)

book_s = book_spider("http://10.1.88.252:7000")
book_s.main()
```

This is a piece of web-crawler code. Find the errors in it and correct them.
The code has the following errors:

1. `get_url` is defined without a `self` parameter, so add the `@staticmethod` decorator above it to make it a static method; otherwise calling it as `self.get_url(url)` fails.
2. In `get_book_list`, `print(book_list)` should be `print(self.book_list)`, because `book_list` is an instance attribute.
3. In `get_book_list`, `self.bbok_list` is a typo for `self.book_list`.
4. In `get_chapter_list`, `self.chapter_list.append((a["href"], a.text.replace("\n", "")))` should be `self.chapter_list.append((self.root_url + a["href"], a.text.replace("\n", "")))`, so the full chapter URL is stored.
5. In `get_content`, the `print(url)` and `print(book_name)` calls are debugging output and can be removed or commented out.
6. Several signatures and names are also wrong: `def get_book_list(self.url)` should be `def get_book_list(self, url)`, `def get_content(self.chapter)` should be `def get_content(self, chapter)`, `def main()` should be `def main(self)`, `self.get_chapter_liat(book)` should be `self.get_chapter_list(book)`, and `for chapter in chapter_list` should be `for chapter in self.chapter_list`.

The corrected code is as follows:

```
import requests
import os
from bs4 import BeautifulSoup

class book_spider():
    def __init__(self, root_url):
        self.root_url = root_url
        self.book_list = []
        self.chapter_list = []

    @staticmethod
    def get_url(url):
        while True:
            try:
                res = requests.get(url)
                if res.status_code == 200:
                    res.encoding = res.apparent_encoding
                    print("页面获取成功")
                    return res.text
                else:
                    print("页面返回异常", res.status_code)
            except:
                print("页面获取错误")

    def get_book_list(self, url):
        res = self.get_url(url)
        html = BeautifulSoup(res, "html.parser")
        a_list = html.find_all("a", {"class": "name"})
        for a in a_list:
            self.book_list.append(self.root_url + a["href"])
        self.book_list.remove('http://10.1.88.252:7000/庆余年')
        print(self.book_list)

    def get_chapter_list(self, url):
        res = self.get_url(url)
        html = BeautifulSoup(res, "html.parser")
        a_list = html.find_all("a", {"class": "chapter"})
        for a in a_list:
            self.chapter_list.append((self.root_url + a["href"], a.text.replace("\n", "")))

    def get_content(self, chapter):
        url = chapter[0]
        book_name = chapter[0].split("/")[1]
        if not os.path.exists(book_name):
            os.mkdir(book_name)
        res = self.get_url(url)
        html = BeautifulSoup(res, "html.parser")
        content = html.find("div", {"id": "content"}).text
        path = os.path.join(book_name, chapter[1])
        with open(path, "w", encoding="utf8") as f:
            f.write(content)

    def main(self):
        self.get_book_list(self.root_url)
        for book in self.book_list:
            self.get_chapter_list(book)
        for chapter in self.chapter_list:
            self.get_content(chapter)

book_s = book_spider("http://10.1.88.252:7000")
book_s.main()
```
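One point in the corrected code above still deserves attention: once fix 4 stores the full chapter URL, `chapter[0].split("/")[1]` in `get_content` no longer yields the book name (splitting `http://10.1.88.252:7000/庆余年/...` on `/` makes element 1 the empty string). Also, `get_url` retries in an unbounded `while True` loop, so a dead link can hang the crawler. The sketch below is one possible way to handle both; it assumes chapter links have the form `/<book>/<chapter>`, which is implied by the original code but not confirmed by the question.

```
from urllib.parse import urlparse

import requests


def get_url(url, max_retries=3):
    # Bounded retry instead of `while True`, so a page that keeps
    # failing cannot hang the crawler forever.
    for _ in range(max_retries):
        try:
            res = requests.get(url, timeout=10)
            if res.status_code == 200:
                res.encoding = res.apparent_encoding
                return res.text
            print("页面返回异常", res.status_code)
        except requests.RequestException as e:
            print("页面获取错误", e)
    return None


def book_name_from_chapter_url(chapter_url):
    # Assumes chapter URLs look like http://<host>/<book>/<chapter>,
    # as the original code implies; the first path segment is the book name.
    path = urlparse(chapter_url).path        # e.g. "/庆余年/第一章"
    return path.lstrip("/").split("/")[0]    # e.g. "庆余年"


if __name__ == "__main__":
    # Hypothetical example URL following the question's site layout.
    print(book_name_from_chapter_url("http://10.1.88.252:7000/庆余年/第一章"))
```

With helpers like these, `book_name = chapter[0].split("/")[1]` in `get_content` could become `book_name = book_name_from_chapter_url(chapter[0])`, and `os.makedirs(book_name, exist_ok=True)` would replace the separate exists-then-mkdir check.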
