The problem of libcurl not downloading the complete data


Today I finally found an English answer for this online.



curl not downloading full webpage

I am trying to run a simple program to start learning curl, but it doesn't get the whole page, merely ~20KB of it :/


#include <iostream>
#include <string>
#include <curl/curl.h>

static std::string buffer;

// Write callback: libcurl hands each received chunk to this function.
// Returning the number of bytes consumed tells libcurl to keep going.
static size_t writer(char *data, size_t size, size_t nmemb, std::string *writerData) {
	if(writerData == NULL) return 0;

	writerData->append(data, size * nmemb);

	return size * nmemb;
}

int main(int argc, char **argv) {
	CURL *curl;
	CURLcode res;

	curl = curl_easy_init();
	if(curl) {
		curl_easy_setopt(curl, CURLOPT_URL, "http://www.neopets.com/games/pyramids/index.phtml");
		curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writer);
		curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);

		res = curl_easy_perform(curl);
		std::cout << buffer;

		curl_easy_cleanup(curl);
	}

	return 0;
}

After a bit more investigation I found that it always cuts out at the same 'place' on the page, with different pages having different amounts of data before that point.

Is it possible they put in a character that mucks it up on purpose?
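(As an aside: one way to diagnose this kind of truncation is to ask libcurl what HTTP status the server actually returned; a 3xx code indicates a redirect rather than a truncated page. A minimal sketch, using the URL from the question; the HEAD-style request via CURLOPT_NOBODY is just for illustration and some servers may treat it differently from a normal GET:)

#include <iostream>
#include <curl/curl.h>

int main() {
	CURL *curl = curl_easy_init();
	if(curl) {
		curl_easy_setopt(curl, CURLOPT_URL, "http://www.neopets.com/games/pyramids/index.phtml");
		// Skip the body; we only want to inspect the status code here.
		curl_easy_setopt(curl, CURLOPT_NOBODY, 1L);

		if(curl_easy_perform(curl) == CURLE_OK) {
			long code = 0;
			curl_easy_getinfo(curl, CURLINFO_RESPONSE_CODE, &code);
			std::cout << "HTTP status: " << code << std::endl;  // a 3xx here means a redirect
		}
		curl_easy_cleanup(curl);
	}
	return 0;
}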


Here is the answer:


After using the CLI curl and getting the same thing, then trying wget to retrieve it, I realised it redirects, so after adding curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, true); to my program it now gets the full 56KB.


Of course, 1 can be used in place of true; CURLOPT_FOLLOWLOCATION is documented as taking a long, so 1L is the portable form.
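For completeness, here is a minimal sketch of the corrected program with the redirect-following option applied (same URL and callback as above; the error handling is kept minimal and is only illustrative):

#include <iostream>
#include <string>
#include <curl/curl.h>

static size_t writer(char *data, size_t size, size_t nmemb, std::string *out) {
	if(out == NULL) return 0;
	out->append(data, size * nmemb);
	return size * nmemb;
}

int main() {
	std::string buffer;
	CURL *curl = curl_easy_init();
	if(curl) {
		curl_easy_setopt(curl, CURLOPT_URL, "http://www.neopets.com/games/pyramids/index.phtml");
		curl_easy_setopt(curl, CURLOPT_WRITEFUNCTION, writer);
		curl_easy_setopt(curl, CURLOPT_WRITEDATA, &buffer);
		// The fix: follow HTTP redirects instead of stopping at the first response.
		curl_easy_setopt(curl, CURLOPT_FOLLOWLOCATION, 1L);

		CURLcode res = curl_easy_perform(curl);
		if(res == CURLE_OK)
			std::cout << buffer;  // now the full page (~56KB) is printed
		else
			std::cerr << curl_easy_strerror(res) << std::endl;

		curl_easy_cleanup(curl);
	}
	return 0;
}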


Finally solved it, noting it down here!
