
Python Crawler Tutorial 32-100: Scraping Bilibili Boruto Comment Data with Scrapy


1. Scraping Bilibili Boruto Comments: Introduction

I spent half the day not knowing what to scrape. While watching dance videos on Bilibili I noticed the comment section, so Bilibili comment data it is. With so many videos and anime to choose from, I picked Boruto, the Naruto spin-off, and gave it a try. URL: https://www.bilibili.com/bangumi/media/md5978/?from=search&seid=16013388136765436883#short
The page shows 18,560 short comments, not a huge amount of data, so let's grab it, once again with Scrapy.


2. Bilibili Boruto Comments: Getting the API Link

From the browser's developer tools you can easily pick out the comment API link used below. Once you have the link, the rest is straightforward; I won't repeat how to create a Scrapy project, so let's get straight to the point.
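If you want a quick look at what this endpoint returns before writing the spider, you can request it once outside Scrapy. A minimal sketch with the requests library (the media_id and starting cursor are the same values used in the spider below; the printed fields follow the JSON visible in the developer tools):

import requests

url = ("https://bangumi.bilibili.com/review/web_api/short/list"
       "?media_id=5978&folded=0&page_size=20&sort=0&cursor=76742479839522")
resp = requests.get(url, headers={"User-Agent": "Mozilla/5.0"})
data = resp.json()

print(data["code"])  # 0 means success
for one in data["result"]["list"]:
    # every record carries a cursor; the last one is used to ask for the next page
    print(one["cursor"], one["author"]["uname"], one["content"][:20])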

In the spider's parse function I set up two yields: one to return items and one to return the request for the next page.
Then I added one more feature, switching the User-Agent on every request, which is where middleware comes in.

import json

import scrapy

from borenzhuan.items import BorenzhuanItem


class BorenSpider(scrapy.Spider):
    base_url = "https://bangumi.bilibili.com/review/web_api/short/list?media_id=5978&folded=0&page_size=20&sort=0&cursor={}"
    name = 'boren'
    allowed_domains = ['bangumi.bilibili.com']

    start_urls = [base_url.format("76742479839522")]

    def parse(self, response):
        print(response.url)
        resdata = json.loads(response.body_as_unicode())

        if resdata["code"] == 0:
            if len(resdata["result"]["list"]) > 0:
                data = resdata["result"]["list"]
                # the cursor of the last record points to the next page
                cursor = data[-1]["cursor"]
                for one in data:
                    item = BorenzhuanItem()

                    item["author"] = one["author"]["uname"]
                    item["content"] = one["content"]
                    item["ctime"] = one["ctime"]
                    item["disliked"] = one["disliked"]
                    item["liked"] = one["liked"]
                    item["likes"] = one["likes"]
                    item["user_season"] = one["user_season"]["last_ep_index"] if "user_season" in one else ""
                    item["score"] = one["user_rating"]["score"]
                    yield item

                # request the next page with the cursor taken above
                yield scrapy.Request(self.base_url.format(cursor), callback=self.parse)
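The BorenzhuanItem imported above is defined in section 4. Once the item and the middleware below are in place, the spider is started from the project root as usual; the -o flag is optional and simply dumps the items to a file in addition to whatever the pipeline writes:

scrapy crawl boren -o boren.json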

3. Bilibili Boruto Comments: Implementing a Random User-Agent

Step 1: add some User-Agent strings to the settings file. I gathered a few from the internet:

USER_AGENT_LIST = [
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/22.0.1207.1 Safari/537.1",
    "Mozilla/5.0 (X11; CrOS i686 2268.111.0) AppleWebKit/536.11 (KHTML, like Gecko) Chrome/20.0.1132.57 Safari/536.11",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1092.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.6 (KHTML, like Gecko) Chrome/20.0.1090.0 Safari/536.6",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/537.1 (KHTML, like Gecko) Chrome/19.77.34.5 Safari/537.1",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.9 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.0) AppleWebKit/536.5 (KHTML, like Gecko) Chrome/19.0.1084.36 Safari/536.5",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 5.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1063.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; Trident/4.0; SE 2.X MetaSr 1.0; SE 2.X MetaSr 1.0; .NET CLR 2.0.50727; SE 2.X MetaSr 1.0)",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1062.0 Safari/536.3",
    "Mozilla/4.0 (compatible; MSIE 7.0; Windows NT 5.1; 360SE)",
    "Mozilla/5.0 (Windows NT 6.1; WOW64) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.1) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.1 Safari/536.3",
    "Mozilla/5.0 (Windows NT 6.2) AppleWebKit/536.3 (KHTML, like Gecko) Chrome/19.0.1061.0 Safari/536.3",
    "Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24",
    "Mozilla/5.0 (Windows NT 6.2; WOW64) AppleWebKit/535.24 (KHTML, like Gecko) Chrome/19.0.1055.1 Safari/535.24"
]

Step 2: enable DOWNLOADER_MIDDLEWARES in the settings file.

# Enable or disable downloader middlewares
# See https://doc.scrapy.org/en/latest/topics/downloader-middleware.html
DOWNLOADER_MIDDLEWARES = {
   # 'borenzhuan.middlewares.BorenzhuanDownloaderMiddleware': 543,
    'borenzhuan.middlewares.RandomUserAgentMiddleware': 400,
}

Step 3: in middlewares.py, import the USER_AGENT_LIST defined in the settings module and write the middleware.

import random

from borenzhuan.settings import USER_AGENT_LIST  # the UA pool defined in settings


class RandomUserAgentMiddleware(object):
    def process_request(self, request, spider):
        rand_use = random.choice(USER_AGENT_LIST)
        if rand_use:
            request.headers.setdefault('User-Agent', rand_use)

That's it, the random UA is in place. To verify it, you can add the following line to the parse function:

print(response.request.headers)
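If the middleware works, the User-Agent shown for different responses keeps changing. A slightly tidier check (a sketch that uses the spider's logger instead of print) could be:

# inside parse(): log only the User-Agent that was sent with this request
ua = response.request.headers.get('User-Agent')
self.logger.info("UA for %s: %s", response.url, ua)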

4. Bilibili Boruto Comments: Completing the Item

This step is straightforward; these fields are exactly the data we want to save.

import scrapy


class BorenzhuanItem(scrapy.Item):
    author = scrapy.Field()
    content = scrapy.Field()
    ctime = scrapy.Field()
    disliked = scrapy.Field()
    liked = scrapy.Field()
    likes = scrapy.Field()
    score = scrapy.Field()
    user_season = scrapy.Field()

5. Bilibili Boruto Comments: Speeding Up the Crawl

Set the following parameters in settings.py:

# Configure maximum concurrent requests performed by Scrapy (default: 16)
CONCURRENT_REQUESTS = 32
# Configure a delay for requests for the same website (default: 0)
# See https://doc.scrapy.org/en/latest/topics/settings.html#download-delay
# See also autothrottle settings and docs
DOWNLOAD_DELAY = 1
# The download delay setting will honor only one of:
CONCURRENT_REQUESTS_PER_DOMAIN = 16
CONCURRENT_REQUESTS_PER_IP = 16
# Disable cookies (enabled by default)
COOKIES_ENABLED = False

Explanation

1) Lower the download delay

DOWNLOAD_DELAY = 0

With the delay at 0 you need matching anti-ban measures. The usual one is User-Agent rotation: build a pool of User-Agent strings and pick one of them for every request, which is exactly what the middleware above does.

2) Raise the concurrency

CONCURRENT_REQUESTS = 32
CONCURRENT_REQUESTS_PER_DOMAIN = 16
CONCURRENT_REQUESTS_PER_IP = 16

Scrapy's networking is built on Twisted, which issues requests asynchronously with non-blocking I/O, so a single process can keep many requests in flight at once. Scrapy already sends requests concurrently by default; raising the settings above simply allows more simultaneous requests and therefore a faster crawl.
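The settings template above also points at AutoThrottle. If you would rather not hand-tune DOWNLOAD_DELAY and the concurrency numbers, Scrapy can adjust the delay by itself; a possible configuration (a sketch, not the one used for this crawl):

# settings.py -- let Scrapy adapt the delay to the server's response times
AUTOTHROTTLE_ENABLED = True
AUTOTHROTTLE_START_DELAY = 1
AUTOTHROTTLE_MAX_DELAY = 10
AUTOTHROTTLE_TARGET_CONCURRENCY = 8.0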

3) Disable cookies

COOKIES_ENABLED = False

This crawl does not need a login, so cookies serve no purpose; disabling them saves a little bandwidth and makes the requests carry less tracking state.

6. Bilibili Boruto Comments: Saving the Data

Finally, write the saving code in pipelines.py:

import csv
import os


class BorenzhuanPipeline(object):

    def __init__(self):
        store_file = os.path.dirname(__file__) + '/spiders/bore.csv'
        self.file = open(store_file, "a+", newline="", encoding="utf-8")
        self.writer = csv.writer(self.file)

    def process_item(self, item, spider):
        try:
            self.writer.writerow((
                item["author"],
                item["content"],
                item["ctime"],
                item["disliked"],
                item["liked"],
                item["likes"],
                item["score"],
                item["user_season"]
            ))
        except Exception as e:
            print(e.args)
        return item

    def close_spider(self, spider):
        self.file.close()
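Note that a pipeline only runs once it is registered in settings.py; assuming the project is named borenzhuan, as in the imports above, the entry looks like this:

# settings.py -- register the pipeline; the number is its priority (lower runs earlier)
ITEM_PIPELINES = {
    'borenzhuan.pipelines.BorenzhuanPipeline': 300,
}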

After running the code for a while, it suddenly threw an error.

I took a look: it turned out the data had simply all been crawled!

