
Error appears after crawling for a while #4

Open
jixuan1989 opened this issue Apr 11, 2017 · 1 comment

Comments

@jixuan1989

After crawling two districts, the error below appeared. At the same time, opening the site in a browser showed an "abnormal traffic" warning and asked me to complete an image CAPTCHA. After I finished it, browsing worked normally again, but the crawler still fails the same way.
Also, the db file I got is only 12K — does that mean it's essentially empty? And should this file be opened with sqlite?

/Library/Python/2.7/site-packages/beautifulsoup4-4.5.3-py2.7.egg/bs4/__init__.py:181: UserWarning: No parser was explicitly specified, so I'm using the best available HTML parser for this system ("html.parser"). This usually isn't a problem, but if you run this code on another system, or in a different virtual environment, it may use a different parser and behave differently.

The code that caused this warning is on line 371 of the file LianJiaSpider.py. To get rid of this warning, change code that looks like this:

 BeautifulSoup([your markup])

to this:

 BeautifulSoup([your markup], "html.parser")

Traceback (most recent call last):
  File "LianJiaSpider.py", line 371, in <module>
    do_xiaoqu_spider(db_xq,region)
  File "LianJiaSpider.py", line 187, in do_xiaoqu_spider
    d="d="+soup.find('div',{'class':'page-box house-lst-page-box'}).get('page-data')
AttributeError: 'NoneType' object has no attribute 'get'
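The crash happens because Lianjia serves a CAPTCHA/verification page once it flags the traffic, so the pagination `div` is missing and `soup.find(...)` returns `None`. A minimal sketch of a defensive version of that line (hypothetical, not the actual `LianJiaSpider.py` code — the sample HTML and the back-off message are illustrative assumptions) that also names the parser explicitly to silence the `UserWarning`:

```python
from bs4 import BeautifulSoup

# Sample markup mimicking Lianjia's pagination block (illustrative only).
html = ('<div class="page-box house-lst-page-box" '
        'page-data=\'{"totalPage":5,"curPage":1}\'></div>')

# Name the parser explicitly to get rid of the "No parser was explicitly
# specified" warning and keep behavior consistent across systems.
soup = BeautifulSoup(html, "html.parser")

box = soup.find('div', {'class': 'page-box house-lst-page-box'})
if box is None:
    # The anti-scraping verification page was served instead of a listing
    # page; back off and retry later instead of crashing with AttributeError.
    print("blocked: got a verification page, retry later")
else:
    d = "d=" + box.get('page-data')
    print(d)
```

The same `None` check around line 187 of `LianJiaSpider.py` would turn the hard crash into a condition you can handle, e.g. by sleeping and retrying, or rotating the session once the CAPTCHA has been solved.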
@XuefengHuang

You could try my crawler instead — it stores the data in MySQL: https://github.com/XuefengHuang/lianjia-scrawler
