Graduation Project | A News Recommendation Platform Built on a Python Web Crawler and Recommendation Algorithms

About the author: recognized Java-domain creator, CSDN blog expert, CSDN content partner, invited Juejin author, Alibaba Cloud blog expert, invited 51CTO author, with years of architecture design experience and a resident instructor role at Tencent Classroom.

Main content: Java projects, Python projects, front-end projects, AI and big data, résumé templates, study materials, interview question banks, and technical help.


Project ID: L-BS-PY-02

1. Project Overview

Web crawler: implemented in Python to scrape Sina News; it extracts each article's title, body text, images, and video links while preserving the original layout.

Recommendation algorithm: weight decay + tag-based recommendation + region-based recommendation + hot-topic recommendation.

  • Weight decay: decays the weights of a user's interest tags over time, so recommended content does not become overly repetitive.
  • Tag-based recommendation: matches the user's tags against each news item's tags and recommends news in proportion to the match ratio.
  • Region-based recommendation: determines the user's region from their IP address and recommends region-relevant articles.
  • Hot-topic recommendation: computes each item's hotness from its read count, comment count, and publication time.
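The heuristics above can be sketched as plain functions. This is an illustrative sketch only: the decay rate, the read/comment weighting, and the half-life are assumed values, not constants taken from the project.

```python
# Illustrative sketch of the recommendation heuristics described above.
# All constants (decay rate, comment weight, half-life) are assumptions.

def decay_weights(tag_weights, rate=0.9, floor=0.01):
    """Weight decay: shrink every interest-tag weight, dropping tags
    whose weight falls below `floor` to avoid repetitive recommendations."""
    decayed = {tag: round(w * rate, 3) for tag, w in tag_weights.items()}
    return {tag: w for tag, w in decayed.items() if w >= floor}

def tag_match_score(user_tags, news_tags):
    """Tag recommendation: fraction of a news item's tags that also
    appear among the user's interest tags."""
    if not news_tags:
        return 0.0
    return len(set(user_tags) & set(news_tags)) / len(news_tags)

def hotness(read_num, comment_num, age_hours, half_life=24.0):
    """Hot-topic recommendation: a score from reads and comments,
    halved every `half_life` hours since publication."""
    return (read_num + 5 * comment_num) * 0.5 ** (age_hours / half_life)
```

Region-based recommendation would then simply filter the candidate set by the user's region before ranking candidates with scores like these.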

Frameworks used: Django, jieba, Selenium, BeautifulSoup, Vue.js
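The crawler itself is not shown in this post. A minimal sketch of its parsing step with BeautifulSoup (one of the frameworks listed above) might look like the following; the selectors are generic assumptions, since real Sina news pages need selectors matched to their actual markup, and the fetching/Selenium side is omitted. The returned keys mirror the `newsdetail` fields used later (`title`, `pic_url`):

```python
# Sketch of the crawler's parsing step; selectors are hypothetical.
from bs4 import BeautifulSoup

def parse_article(html):
    """Extract title, body text (one line per paragraph, preserving
    paragraph layout), and image URLs from an article page."""
    soup = BeautifulSoup(html, "html.parser")
    title = soup.select_one("h1")
    paragraphs = [p.get_text(strip=True) for p in soup.select("p")]
    images = [img["src"] for img in soup.select("img[src]")]
    return {
        "title": title.get_text(strip=True) if title else "",
        "text": "\n".join(paragraphs),
        "pic_url": images,
    }
```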

2. System Showcase

2.1 Software Function Structure

2.2 Screenshots

User side

(screenshots omitted)

Admin side

(screenshots omitted)

3. Core Code

3.1 Fetching the News List

import datetime
import json
import time
from django.core import serializers
from django.db.models import Q
from django.http import JsonResponse

from news_api.models import newsdetail, recommend, newshot, newssimilar, history, comments, user, givelike, message


def all_news(request):
    '''
        @Description: fetch all news
        @:param None
    '''
    if request.method == "GET":
        newslist = serializers.serialize("json", newsdetail.objects.all().order_by('-news_id'))
        response = JsonResponse({"status": 100, "newslist": newslist})
        response["Access-Control-Allow-Origin"] = "*"
        response["Access-Control-Allow-Credentials"] = "true"
        response["Access-Control-Allow-Methods"] = "GET,POST"
        response["Access-Control-Allow-Headers"] = "Origin,Content-Type,Cookie,Accept,Token"
        response["Cache-Control"] = "no-cache"
        return response


def del_news(request):
    '''
        @Description: delete the specified news item
        @:param url --- URL of the news item
    '''
    if request.method == "GET":
        url = request.GET.get('url')
        # print(user.objects.filter(userid=userid).delete()[0])
        if newsdetail.objects.filter(url=url).delete()[0] == 0:
            return JsonResponse({"status": "100", "message": "Fail."})
        else:
            return JsonResponse({"status": "100", "message": "Success."})


def reconewsbytags(request):
    '''
        @Description: push the user's personalized recommendation set
        @:param userid --- user id
    '''
    if request.method == "GET":
        userid = request.GET.get('userid')
        newsidlist = recommend.objects.filter(userid=userid, headread=0)
        newsdetaillist = list()
        for news in newsidlist:
            newsdetaillist.append(serializers.serialize("json", newsdetail.objects.filter(news_id=news.newsid)))
        response = JsonResponse({"status": 100, "newsidlist": newsdetaillist})
        return response


def reconewsbysimilar(request):
    '''
        @Description: push similar news for a given article
        @:param newsid --- news id
    '''
    if request.method == "GET":
        newsid = request.GET.get('newsid')
        newsidlist = newssimilar.objects.filter(new_id_base=newsid).order_by('-new_correlation')[:5]
        newsdetaillist = list()
        for news in newsidlist:
            detail = newsdetail.objects.filter(news_id=news.new_id_sim)
            data = {
                'newsid': detail[0].news_id,
                'title': detail[0].title,
                'pic_url': detail[0].pic_url,
                'mainpage': detail[0].mainpage,
            }
            newsdetaillist.append(data)
            # newsdetaillist.append(serializers.serialize("json", newsdetail.objects.filter(news_id=news.new_id_sim)))
        response = JsonResponse({"status": 100, "newslist": newsdetaillist})
        return response


def typenews(request):
    '''
        @Description: push news for a given category
        @:param type --- category id
    '''
    if request.method == "GET":
        typeid = request.GET.get('type')
        newsidlist = newshot.objects.filter(category=typeid).order_by('-news_hot')
        newsdetaillist = list()
        for news in newsidlist:
            newsdetaillist.append(serializers.serialize("json", newsdetail.objects.filter(news_id=news.news_id)))
        response = JsonResponse({"status": 100, "newslist": newsdetaillist})
        return response


def reconewsbyregion(request):
    '''
        @Description: recommend region-related news based on the user's IP address
        @:param userid --- user id
    '''
    if request.method == "GET":
        userid = request.GET.get('userid')
        users = user.objects.filter(userid=userid).first()  # was: userid.objects (a bug)
        if users is None:
            return JsonResponse({"status": "100", "message": "Fail."})
        # users.region is derived from users.ip elsewhere in the project;
        # match it against article keywords as a simple regional filter.
        newslist = newsdetail.objects.filter(keywords__contains=users.region)
        return JsonResponse({"status": 100, "newslist": serializers.serialize("json", newslist)})

def getpicture(request):
    '''
        @Description: fetch image, title, and news id for the hottest news items that have pictures
    '''
    if request.method == "GET":
        newshotlist = newshot.objects.all().order_by('-news_hot')[:10]
        pictlist = list()
        for news in newshotlist:
            temp = newsdetail.objects.filter(news_id=news.news_id).exclude(pic_url='[]')
            if len(temp) > 0:
                url = temp[0].pic_url
                newid = temp[0].news_id
                title = temp[0].title
                pictlist.append({'newsid': newid, 'pic_url': eval(url)[0], 'title': title})
        return JsonResponse({"status": "100", "message": pictlist})


def getNewsDetailByNewsid(request):
    '''
        @Description: fetch news detail by newsid and update the user's tag weights
        @:param newsid --- news id
        @:param userid --- user id
    '''
    if request.method == "GET":
        newsid = request.GET.get('newsid')
        userid = request.GET.get('userid')
        news = newsdetail.objects.filter(news_id=newsid)[0]
        newsdetail.objects.filter(news_id=newsid).update(readnum=(int(news.readnum) + 1))
        if int(userid) != 100000:
            users = user.objects.filter(userid=userid)[0]
            usertags = users.tags
            usertags = set(usertags.split(','))
            if news.keywords != None:
                newskeywords = set(news.keywords.split(','))
            else:
                newskeywords = set()
            # key = usertags & newskeywords
            # key = list(key)
            weight = eval(users.tagsweight)
            for keyword in newskeywords:
                if keyword in weight:
                    weight[keyword] = float(format(weight[keyword] + 0.01, ".3f"))
                    if weight[keyword] >= 0.1:
                        usertags.add(keyword)
                        user.objects.filter(userid=userid).update(tags=str(",".join(usertags)))
                else:
                    weight[keyword] = 0.01
            print(weight)
            user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
            # if len(key) > 0:
            #     weight = eval(users.tagsweight)
            #     weight[key[0]] = weight.get(key[0]) + 0.01
            #     print(weight)
            #     user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
        temp = givelike.objects.filter(newsid=newsid, userid=userid)
        print(len(temp))
        if len(temp) == 0:
            liking = 0
        else:
            liking = temp[0].givelikeornot
        newsdetails = {
            "newsid": news.news_id,
            "title": news.title,
            "date": news.date,
            "pic_url": news.pic_url,
            "videourl": news.videourl,
            "category": news.category,
            "readnum": int(news.readnum) + 1,
            "comments": news.comments,
            "origin": news.origin,
            "givelike": liking,
        }
        return JsonResponse({"status": "100", "message": newsdetails})


def all_news_to_page(request):
    '''
        @Description: fetch the latest 100 news items for the list page
        @:param None
    '''
    if request.method == "GET":
        newslist = serializers.serialize("json", newsdetail.objects.all().order_by('-news_id')[0:100])
        response = JsonResponse({"status": 100, "newslist": newslist})
        response["Access-Control-Allow-Origin"] = "*"
        response["Access-Control-Allow-Credentials"] = "true"
        response["Access-Control-Allow-Methods"] = "GET,POST"
        response["Access-Control-Allow-Headers"] = "Origin,Content-Type,Cookie,Accept,Token"
        response["Cache-Control"] = "no-cache"
        return response


def newsHistory(request):
    '''
        @Description: record the user's reading history
        @:param userid --- user id
        @:param newsid --- news id
    '''
    if request.method == "GET":
        userid = request.GET.get('userid')
        newsid = request.GET.get('newsid')
        daytime = time.strftime("%Y-%m-%d %H:%M:%S", time.localtime())
        history.objects.create(userid=userid, history_newsid=newsid, time=daytime)
        return JsonResponse({"status": "200"})


def newsHotRec(request):
    '''
        @Description: fetch the top-5 hot news recommendations
        @:param None
    '''
    if request.method == "GET":
        hotnewsidlist = newshot.objects.all().order_by('-news_hot')[:5]
        newsdetaillist = list()
        for hotnews in hotnewsidlist:
            detail = newsdetail.objects.filter(news_id=hotnews.news_id)[0]
            data = {
                'newsid': detail.news_id,
                'mainpage': detail.mainpage,
                'title': detail.title,
                'pic_url': detail.pic_url,
            }
            newsdetaillist.append(data)
            # hotnews.news_id
        return JsonResponse({"status": "200", 'newslist': newsdetaillist})


def getComments(request):
    '''
        @Description: fetch the comment list of a news item
        @:param newsid --- news id
    '''
    if request.method == "GET":
        newsid = request.GET.get('newsid')
        commentlistdata = comments.objects.filter(newsid=newsid, status="正常")
        commentlist = list()
        for comment in commentlistdata:
            # comment = commentlistdata[commentid]
            # print(commentid)

            User = user.objects.filter(userid=comment.userid)[0]
            userheadPortrait = User.headPortrait
            userName = User.username
            touser = user.objects.filter(userid=comment.touserid)
            ToUser = None
            if len(touser) != 0:
                ToUser = touser[0]
            if ToUser != None:
                toUserHeadPortrait = ToUser.headPortrait
                toUserName = ToUser.username
            else:
                toUserHeadPortrait = None
                toUserName = None

            data = {
                'userid': comment.userid,
                'touserid': comment.touserid,
                'comments': comment.comments,
                'time': comment.time,
                'username': userName,
                'userheadPortrait': userheadPortrait,
                'tousername': toUserName,
                'toUserHeadPortrait': toUserHeadPortrait,
            }
            commentlist.append(data)
        return JsonResponse({"status": "200", 'commentlist': commentlist})


def gethotnews(request):
    '''
        @Description: fetch the hot-news ranking (top 50)
    '''
    if request.method == "GET":
        newsidlist = newshot.objects.all().order_by('-news_hot')[0:50]
        newslist = list()
        for news in newsidlist:
            detail = newsdetail.objects.filter(news_id=news.news_id)
            data = {
                "newsid": detail[0].news_id,
                "title": detail[0].title,
                "date": detail[0].date,
                "pic_url": detail[0].pic_url,
                "mainpage": detail[0].mainpage,
                "category": detail[0].category,
                "readnum": detail[0].readnum,
                "comments": detail[0].comments,
                "hotvalue": news.news_hot,
            }
            newslist.append(data)
        return JsonResponse({"status": "200", 'newslist': newslist})


def updateGiveLike(request):
    '''
        @Description: update like/dislike state and adjust tag weights accordingly
        @:param newsid --- news id
        @:param userid --- user id
        @:param like --- click state 0/1/2 (none/like/dislike)
    '''
    if request.method == "GET":
        newsid = request.GET.get('newsid')
        userid = request.GET.get('userid')
        like = request.GET.get('like')
        if int(like) == 1:
            if int(userid) != 100000:
                users = user.objects.filter(userid=userid)[0]
                usertags = users.tags
                news = newsdetail.objects.filter(news_id=newsid)[0]
                usertags = set(usertags.split(','))
                if news.keywords != None:
                    newskeywords = set(news.keywords.split(','))
                else:
                    newskeywords = set()
                key = usertags & newskeywords
                key = list(key)
                if len(key) > 0:
                    weight = eval(users.tagsweight)
                    weight[key[0]] = weight.get(key[0]) + 0.01
                    user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
        if int(like) == 2:
            if int(userid) != 100000:
                users = user.objects.filter(userid=userid)[0]
                usertags = users.tags
                news = newsdetail.objects.filter(news_id=newsid)[0]
                usertags = set(usertags.split(','))
                if news.keywords != None:
                    newskeywords = set(news.keywords.split(','))
                else:
                    newskeywords = set()
                for k in newskeywords:
                    weight = eval(users.tagsweight)
                    if k in weight:
                        if weight[k] >= 0.1:
                            weight[k] = float(format(weight.get(k) - 0.1, ".3f"))
                            if weight.get(k) > 0:
                                user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
                            else:
                                weight.pop(k)
                                print('weight', weight)
                                user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
                                usertags.remove(k)
                                newusertags = ','.join(usertags)
                                user.objects.filter(userid=userid).update(tags=newusertags)
        selectres = givelike.objects.filter(userid=userid, newsid=newsid)
        if len(selectres) == 0:
            givelike(userid=userid, newsid=newsid, givelikeornot=like).save()
        else:
            selectres.update(userid=userid, newsid=newsid, givelikeornot=like)
        return JsonResponse({"status": "200", 'message': 'Success.'})
    else:
        return JsonResponse({"status": "200", 'message': 'Fail.'})


def submitComments(request):
    '''
        @Description: submit a comment on a news item
        @:param userid --- commenting user id
        @:param newsid --- news id
        @:param comment --- comment text
    '''
    if request.method == "POST":
        req = json.loads(request.body)
        print(req)
        userid = req['userid']
        newsid = req['newsid']
        comment = req['comment']
        # print('comment', comment)
        if int(userid) != 100000:
            users = user.objects.filter(userid=userid)[0]
            usertags = users.tags
            news = newsdetail.objects.filter(news_id=newsid)[0]
            usertags = set(usertags.split(','))
            if news.keywords != None:
                newskeywords = set(news.keywords.split(','))
            else:
                newskeywords = set()
            key = usertags & newskeywords
            key = list(key)
            if len(key) > 0:
                weight = eval(users.tagsweight)
                weight[key[0]] = weight.get(key[0]) + 0.01
                print(weight)
                user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
        time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        comments(userid=userid, newsid=newsid, comments=comment, time=time, status="正常").save()
        newsdetail.objects.filter(news_id=newsid).update(
            comments=int(newsdetail.objects.filter(news_id=newsid)[0].comments) + 1)
        return JsonResponse({"status": "200", 'message': 'Success.'})


def submitCommenttoUser(request):
    '''
        @Description: reply to another user's comment
        @:param userid --- replying user id
        @:param newsid --- news id
        @:param comment --- reply text
        @:param touserid --- id of the user being replied to
    '''
    if request.method == "POST":
        req = json.loads(request.body)
        print(req)
        userid = req['userid']
        newsid = req['newsid']
        comment = req['comment']
        touserid = req['touserid']
        if int(userid) != 100000:
            users = user.objects.filter(userid=userid)[0]
            usertags = users.tags
            news = newsdetail.objects.filter(news_id=newsid)[0]
            usertags = set(usertags.split(','))
            if news.keywords != None:
                newskeywords = set(news.keywords.split(','))
            else:
                newskeywords = set()
            key = usertags & newskeywords
            key = list(key)
            if len(key) > 0:
                weight = eval(users.tagsweight)
                weight[key[0]] = weight.get(key[0]) + 0.01
                print(weight)
                user.objects.filter(userid=userid).update(tagsweight=str(weight).replace("\'", "\""))
        time = datetime.datetime.now().strftime('%Y-%m-%d %H:%M:%S')
        sendMessage = "新的回复了!!请速速查看!!"  # i.e. "You have a new reply, go check it!"
        comments(userid=userid, newsid=newsid, comments=comment, time=time, touserid=touserid, status="正常").save()
        message(userid=touserid, message=sendMessage, time=time, newsid=newsid, title="收到回复", hadread=0).save()
        return JsonResponse({"status": "200", 'message': 'Success.'})


def getManageHomeData(request):
    '''
        @Description: fetch dashboard statistics for the admin home page
        @:param None
    '''
    if request.method == "GET":
        readnum = len(history.objects.all())
        userlist = user.objects.all()
        usernum = len(userlist)
        newsnum = len(newsdetail.objects.all())
        regionlist = dict()
        for us in userlist:
            if regionlist.get(us.region) == None:
                regionlist[us.region] = 1
            else:
                regionlist[us.region] = regionlist[us.region] + 1
        reclist = recommend.objects.filter(hadread=1)
        recnum = len(recommend.objects.all())
        statistical = dict()
        for rec in reclist:
            if statistical.get(rec.time) == None:
                statistical[rec.time] = 1
            else:
                statistical[rec.time] = statistical[rec.time] + 1
        comnum = len(comments.objects.all())
        likenum = len(givelike.objects.filter(givelikeornot=1))
        data = {
            'usernum': usernum,
            'readnum': readnum,
            'newsnum': newsnum,
            'recnum': recnum,
            'comnum': comnum,
            'statistical': statistical,
            'likenum': likenum,
            'regionlist': regionlist,
        }
        return JsonResponse({"status": "200", 'message': data})


def updateRecHis(request):
    '''
        @Description: mark a recommended news item as read
        @:param userid --- user id
        @:param newsid --- news id
    '''
    if request.method == "GET":
        userid = request.GET.get('userid')
        newsid = request.GET.get('newsid')
        recommend.objects.filter(newsid=newsid, userid=userid).update(hadread=1)
        return JsonResponse({"status": "200", 'message': 'Success.'})
    return JsonResponse({"status": "200", 'message': 'Fail.'})


def searchNews(request):
    '''
        @Description: fuzzy search news from the admin side
        @:param keyword --- search keyword
    '''
    if request.method == "GET":
        keyword = request.GET.get('keyword')
        newslist = newsdetail.objects.filter(Q(title__contains=keyword) | Q(mainpage__contains=keyword))
        response = JsonResponse({"status": 100, "newslist": serializers.serialize("json", newslist)})
        return response

4. Related Works

Hands-on projects developed in Java, Python, PHP, C#, and other languages

Front-end projects built with Node.js, Vue, and related technologies

WeChat mini-program and Android app projects

Embedded and IoT applications based on the 51 microcontroller family

AI applications implemented with various algorithms

Data management and recommendation systems built on big data
