
Python Example: Scraping Amazon Data and Exporting It to an Excel File

程序员文章站 2023-11-28 21:11:10

This article works through a Python example that scrapes Amazon data and exports it to an Excel file. It is shared here for your reference; the details follow.

Python veterans, please don't flame me: the code is rough and was written mainly to get the job done, so take from it whatever is worth borrowing. I am a Java developer, not a Python one, and have only taught myself a little Python, so please bear with me.
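The scraper below does all of its HTML parsing with regular expressions rather than an HTML parser. As a quick primer on the core pattern it uses everywhere (the sample HTML and ASIN value here are invented for illustration): a non-greedy capture group between two fixed delimiters, compiled with re.S so that `.` also matches newlines.

```python
import re

# Toy HTML standing in for a scraped search-results page (illustrative only).
html = '<li data-asin="B000TEST01" class="s-result-item celwidget"></li>'

# Non-greedy capture between two fixed delimiters, as the scraper does.
pattern = re.compile(r'data-asin="(.+?)" class="s-result-item celwidget"', re.S)
asins = pattern.findall(html)
print(asins)  # ['B000TEST01']
```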

#!/usr/bin/env python3
# encoding=utf-8
import sys
import re
import time
import math
from html import unescape
import xlwt
import requests
# raise Python's recursion limit (set very high here: one billion)
sys.setrecursionlimit(1000000000)
## Fetch all category names from the Amazon home page
def getprourl():
  headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"}
  session = requests.Session()
  furl = "https://www.amazon.cn/?tag=baidu250-23&hvadid={creative}&ref=pz_ic_22fvxh4dwf_e&page="
  for i in range(0, 1):
    html = session.post(furl + str(i), headers=headers)
    html.encoding = 'utf-8'
    # category names appear in the page source as "category" : "..."
    name = '"category" : "' + '(.*?)' + '"'
    reg1 = re.compile(name, re.S)
    urllist = reg1.findall(html.text)
    return urllist
## Build the search URL for a given category
def geturldata(ci):
  url = "https://www.amazon.cn/s/ref=nb_sb_noss_2?__mk_zh_cn=%e4%ba%9a%e9%a9%ac%e9%80%8a%e7%bd%91%e7%ab%99&url=search-alias%3daps&field-keywords=" + ci + "&page=1&sort=review-rank"
  return url
## Simple rate limiter: wait 3 seconds before continuing
def fun_timer():
  time.sleep(3)
## Fetch the search-results page for each category
def getprodata(allurllist):
  webcontenthtmllist = []
  headers = {"User-Agent": "Mozilla/5.0 (Windows NT 10.0; WOW64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/50.0.2661.102 Safari/537.36"}
  for ci in allurllist:
    session = requests.Session()
    fun_timer()
    html = session.get(geturldata(ci), headers=headers)
    # set the encoding
    html.encoding = 'utf-8'
    print(html.text)
    webcontenthtmllist.append(html.text)
  return webcontenthtmllist
## Filter the required attributes and values out of the page content
def getprovalue():
  urllist = getprourl()
  urllist.remove('全部分类')
  urllist.remove('Prime会员优先购')
  # split the category list into groups of five (at most eight groups)
  grouplist = [urllist[i:i + 5] for i in range(0, min(len(urllist), 40), 5)]
  ## collection holding all extracted rows
  datatwoalllist1 = []
  print("Retrieving data, please wait..........")
  for group in grouplist:
    # fetch the search-results pages for this group of categories
    for html in getprodata(group):
      # extract the six fields for the first 15 results on each page
      for i in range(15):
        datalist = []
        datalist.append(unescape(getprocategory(html, i)))
        datalist.append(unescape(getprotitle(html, i)))
        datalist.append(getproprice(html, i))
        datalist.append(getsellercount(html, i))
        datalist.append(getprostar(html, i))
        datalist.append(getprocommentcount(html, i))
        print(datalist)
        datatwoalllist1.append(datalist)
  print("Data retrieval finished!!!!")
  print("Saving and writing the Excel document!!!!")
  ## save the document
  createtable(time.strftime("%Y%m%d") + '亚马逊销量数据统计.xls', datatwoalllist1)
## Extract the category (the first match on the page is used for every row)
def getprocategory(html, i):
    i = 0
    name = '<span class="a-color-state a-text-bold">' + '(.*?)' + '</span>'
    reg = re.compile(name, re.S)
    items = reg.findall(html)
    if len(items) == 0:
      return ""
    if i < len(items):
      return items[i]
    return ""
## Extract the title
def getprotitle(html, i):
  html = gethtmlbyid(html, i)
  name = '<a class="a-link-normal s-access-detail-page s-color-twister-title-link a-text-normal" target="_blank" title="' + '(.*?)' + '"'
  reg = re.compile(name, re.S)
  items = reg.findall(html)
  if len(items) == 0:
    return ""
  return items[0]
## Extract the price
def getproprice(html, i):
  html = gethtmlbyid(html, i)
  name = '<span class="a-size-base a-color-price s-price a-text-bold">' + '(.*?)' + '</span>'
  reg = re.compile(name, re.S)
  items = reg.findall(html)
  if len(items) == 0:
    return "¥0"
  return items[0]
## Extract the seller count
def getsellercount(html, i):
  html = gethtmlbyid(html, i)
  name = '<span class="a-color-secondary">' + '(.*?)' + '</span>'
  reg = re.compile(name, re.S)
  items = reg.findall(html)
  if len(items) == 0:
    return "(0 卖家)"
  return checksellercount(items, 0)
## Check the seller-count candidates: return the first entry mentioning '卖家',
## provided it is short enough to be a count rather than stray markup
def checksellercount(items, i):
  for item in items[i:]:
    if item.find('卖家') >= 0:
      if len(item) <= 9:
        return item
      return '(0 卖家)'
  return '(0 卖家)'
## Extract the star rating
def getprostar(html, i):
  html = gethtmlbyid(html, i)
  name = '<span class="a-icon-alt">' + '(.*?)' + '</span>'
  reg = re.compile(name, re.S)
  items = reg.findall(html)
  if len(items) == 0:
    return "平均 0 星"
  return checkprostar(items, 0)
## Check the star-rating candidates: return the first entry that mentions '星'
def checkprostar(items, i):
  for item in items[i:]:
    if item.find('星') >= 0:
      return item
  return '平均 0 星'
## Extract the review count (used as a sales proxy); the matched anchor looks like:
## <a class="a-size-small a-link-normal a-text-normal" target="_blank" href="https://www.amazon.cn/dp/b073lbrnv2/ref=sr_1_1?ie=utf8&qid=1521782688&sr=8-1&keywords=%e5%9b%be%e4%b9%a6#customerreviews" rel="external nofollow" >56</a>
def getprocommentcount(html, i):
  name = '<a class="a-size-small a-link-normal a-text-normal" target="_blank" href=".*?#customerreviews" rel="external nofollow" ' + '(.*?)' + '</a>'
  reg = re.compile(name, re.S)
  items = reg.findall(html)
  if len(items) == 0:
    return "0"
  if i < len(items):
    return items[i].strip(">")
  return "0"
## Pull the opening tag with the given id out of the html
def get_id_tag(content, id_name):
  id_name = id_name.strip()
  patt_id_tag = """<[^>]*id=['"]?""" + id_name + """['" ][^>]*>"""
  id_tag = re.findall(patt_id_tag, content, re.DOTALL | re.IGNORECASE)
  if id_tag:
    id_tag = id_tag[0]
  else:
    id_tag = ""
  return id_tag
## Narrow the scope: slice out the html between result_i and result_{i+1}
def gethtmlbyid(html, i):
    start = get_id_tag(html, "result_" + str(i))
    end = get_id_tag(html, "result_" + str(i + 1))
    # escape the tags so any regex metacharacters in them are matched literally
    name = re.escape(start) + '.*?' + re.escape(end)
    reg = re.compile(name, re.S)
    html = html.strip()
    items = reg.findall(html)
    if len(items) == 0:
      return ""
    return items[0]
## Generate the Excel document
def createtable(tablename, datatwoalllist):
  results = []
  results.append("类别,标题,价格,卖家统计,星级,评论数")
  columnname = results[0].split(',')
  # create a workbook; utf-8 encoding so the sheet supports Chinese
  wb = xlwt.Workbook(encoding='utf-8')
  # create a sheet
  sheet = wb.add_sheet('sheet 1')
  # number of data rows
  rows = len(datatwoalllist)
  # number of columns
  columns = len(columnname)
  # cell style for data rows
  style = xlwt.XFStyle()
  # create and set the font
  font = xlwt.Font()
  font.name = 'Times New Roman'
  # apply the font to the style
  style.font = font
  # centre-align the cells
  alignment = xlwt.Alignment()
  alignment.horz = xlwt.Alignment.HORZ_CENTER
  style.alignment = alignment
  # header style: same font, bold
  style1 = xlwt.XFStyle()
  font1 = xlwt.Font()
  font1.name = 'Times New Roman'
  # font colour (green)
  # font1.colour_index = 3
  font1.bold = True
  style1.font = font1
  style1.alignment = alignment
  # set every column's width
  for i in range(columns):
    sheet.col(i).width = 5000
  # write the header row
  for i in range(columns):
    sheet.write(0, i, columnname[i], style1)
  # write the data rows (rows is the data count, hence range up to rows + 1)
  for i in range(1, rows + 1):
    for j in range(0, columns):
      sheet.write(i, j, datatwoalllist[i - 1][j], style)
  # save once, after all rows are written
  wb.save(tablename)
## Entry point
input("Press Enter to start the export..........")
fun_timer()
print("Crawling starts in three seconds......., please wait!")
getprovalue()
print("Data exported successfully! Please check the output!")
print("The document 亚马逊销量数据统计.xls has been saved under C:\\Windows\\SysWOW64 on the C drive!!!!")
input()
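The core trick in get_id_tag/gethtmlbyid above is to locate the opening tags carrying the ids result_i and result_{i+1} and slice out everything between them, so each per-field regex only searches one result's markup. A self-contained sketch of the same technique on toy HTML (the tag content is invented for illustration):

```python
import re

def id_tag(content, id_name):
    # find the opening tag carrying the given id
    patt = r"""<[^>]*id=['"]?""" + id_name + r"""['" ][^>]*>"""
    tags = re.findall(patt, content, re.DOTALL | re.IGNORECASE)
    return tags[0] if tags else ""

def html_by_id(html, i):
    # slice out the markup between result_i and result_{i+1}
    start = id_tag(html, "result_%d" % i)
    end = id_tag(html, "result_%d" % (i + 1))
    items = re.findall(re.escape(start) + '.*?' + re.escape(end), html, re.S)
    return items[0] if items else ""

html = ('<li id="result_0" class="item"><b>first</b></li>'
        '<li id="result_1" class="item"><b>second</b></li>')
chunk = html_by_id(html, 0)
print('first' in chunk and 'second' not in chunk)  # True
```

Per-field regexes run against `chunk` can then only ever match the first result, which is why the scraper indexes every field with the same `i`.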

The resulting data:

(screenshot: the exported result data)

Pack it into an exe file that runs on double-click. I won't walk through the packaging step by step; it is just a few commands:

You need to install pyinstaller. In the exe-build command below, --icon sets the icon file, whose path here is the same as the project's current directory.

I hit quite a few problems along the way and solved them one by one: garbled text, IP rate limits, modules not found after packaging, the maximum recursion depth, and some filtering issues.
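One of the garbled-text fixes used in the listing is to round-trip a response through an encoding with errors='ignore', silently dropping any characters the target codec cannot represent. A minimal stdlib-only illustration (the sample string is invented; '€' has no gb2312 mapping, so it is discarded):

```python
# Round-trip text through gb2312, dropping characters it cannot encode.
text = "价格: 59.00 元€"  # invented sample; '€' is not representable in gb2312
cleaned = text.encode('gb2312', 'ignore').decode('gb2312')
print(cleaned)  # 价格: 59.00 元
```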

pyinstaller -F -c --icon=my.ico crawling.py    (this is the packaging command)

(screenshot: the packaging run)

The result:

(screenshot: the finished output)


I hope this article is helpful to everyone's Python programming.