mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '127.0.0.1... (on Scrapinghub)

I am trying to run my spider on Scrapinghub, and when it runs I get this error:

Traceback (most recent call last):
  File "/usr/local/lib/python3.6/site-packages/twisted/internet/defer.py", line 1418, in _inlineCallbacks
    result = g.send(result)
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 80, in crawl
    self.engine = self._create_engine()
  File "/usr/local/lib/python3.6/site-packages/scrapy/crawler.py", line 105, in _create_engine
    return ExecutionEngine(self, lambda _: self.stop())
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/engine.py", line 70, in __init__
    self.scraper = Scraper(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/core/scraper.py", line 71, in __init__
    self.itemproc = itemproc_cls.from_crawler(crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 53, in from_crawler
    return cls.from_settings(crawler.settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/middleware.py", line 35, in from_settings
    mw = create_instance(mwcls, settings, crawler)
  File "/usr/local/lib/python3.6/site-packages/scrapy/utils/misc.py", line 144, in create_instance
    return objcls(*args, **kwargs)
  File "/app/__main__.egg/skripsi/pipelines.py", line 19, in __init__
  File "/app/__main__.egg/skripsi/pipelines.py", line 29, in create_connection
  File "/app/python/lib/python3.6/site-packages/mysql/connector/__init__.py", line 173, in connect
    return MySQLConnection(*args, **kwargs)
  File "/app/python/lib/python3.6/site-packages/mysql/connector/connection.py", line 104, in __init__
    self.connect(**kwargs)
  File "/app/python/lib/python3.6/site-packages/mysql/connector/abstracts.py", line 780, in connect
    self._open_connection()
  File "/app/python/lib/python3.6/site-packages/mysql/connector/connection.py", line 284, in _open_connection
    self._socket.open_connection()
  File "/app/python/lib/python3.6/site-packages/mysql/connector/network.py", line 532, in open_connection
    errno=2003, values=(self.get_address(), _strioerror(err)))
mysql.connector.errors.InterfaceError: 2003: Can't connect to MySQL server on '127.0.0.1:3306' (111 Connection refused)
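Errno 111 (Connection refused) means the TCP handshake itself failed: nothing is listening on 127.0.0.1:3306 inside the Scrapinghub container, since 127.0.0.1 there refers to the container, not the machine where MySQL actually runs. A quick standard-library sketch for checking reachability of a host/port from the job (the helper name `can_reach` is my own):

```python
import socket

def can_reach(host, port, timeout=3.0):
    """Return True if a plain TCP connection to host:port succeeds.

    If this returns False for the address the pipeline uses, MySQL
    error 2003 / errno 111 is expected: no server is listening there.
    """
    try:
        with socket.create_connection((host, port), timeout=timeout):
            return True
    except OSError:
        return False
```

Calling `can_reach('127.0.0.1', 3306)` inside the Scrapy Cloud container would show whether anything is listening on that port at all.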

I have already added the MySQL connector for Python to requirements.txt and configured my dependencies in scrapinghub.yml like this:

My requirements.txt:

mysql-connector-python

My scrapinghub.yml:

projects:
  default: 396892
stacks:
  default: scrapy:1.6-py3
requirements:
  file: requirements.txt

My pipelines.py:

import mysql.connector

class SkripsiPipeline(object):

    def __init__(self):
        self.create_connection()
        # dispatcher.connect(self.close_spider, signals.close_spider)
        # self.create_table()

    def create_connection(self):
        self.conn = mysql.connector.connect(
            host='127.0.0.1',
            password='',
            user='root',
            database='news'
        )
        self.curr = self.conn.cursor()

    def process_item(self, item, spider):
        self.store_db(item)
        return item

    def store_db(self, item):
        self.curr.execute(
            "INSERT INTO news_tb (url, title, author, time, crawl_time, imagelink, content) "
            "VALUES (%s, %s, %s, %s, %s, %s, %s)",
            (
                item['url'][0],
                item['title'][0],
                item['author'][0],
                item['time'][0],
                item['crawl_time'][0],
                item['imagelink'][0],
                item['content'][0],
            )
        )
        self.conn.commit()
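Since the pipeline hardcodes `host='127.0.0.1'`, one way to make the same code work both locally and in the cloud is to read the connection parameters from the environment instead. A minimal sketch, assuming a publicly reachable database host; the variable names `DB_HOST`, `DB_USER`, `DB_PASSWORD`, and `DB_NAME` are hypothetical, and the defaults mirror the values currently in `create_connection()`:

```python
import os

def build_db_config():
    # Hypothetical helper: assemble MySQL connection parameters from
    # environment variables rather than hardcoding 127.0.0.1, which
    # inside a cloud container points at the container itself.
    return {
        'host': os.environ.get('DB_HOST', '127.0.0.1'),
        'user': os.environ.get('DB_USER', 'root'),
        'password': os.environ.get('DB_PASSWORD', ''),
        'database': os.environ.get('DB_NAME', 'news'),
    }
```

The resulting dict could then be passed straight to `mysql.connector.connect(**build_db_config())`; if the platform does not let you set environment variables, the same idea can be adapted to read Scrapy settings instead.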

This is the error I get when I run my spider on Scrapinghub. If anyone is familiar with this problem, please let me know.

Thanks.
