For the record, concatenation is faster than `fputcsv` or even `implode` (I'm serious), and the file size is smaller:

```php
// The data from Eternal Oblivion is an object, always
$values = (array) fetchDataFromEternalOblivion($userId, $limit = 1000);

// ----- fputcsv (slow)
// The code of @Alain Tiemblo is the best implementation
ob_start();
$csv = fopen("php://output", 'w');
fputcsv($csv, array_keys(reset($values)));
foreach ($values as $row) {
    fputcsv($csv, $row);
}
fclose($csv);
return ob_get_clean();

// ----- implode (slow, but the file size is smaller)
$csv = implode(",", array_keys(reset($values))) . PHP_EOL;
foreach ($values as $row) {
    $csv .= '"' . implode('","', $row) . '"' . PHP_EOL;
}
return $csv;

// ----- concatenation (fast, and the file size is smaller)
// We can use one implode for the headers =D
$csv = implode(",", array_keys(reset($values))) . PHP_EOL;
// This is less flexible, but we have more control over the formatting
foreach ($values as $row) {
    $csv .= '"' . $row['id'] . '",';
    $csv .= '"' . $row['name'] . '",';
    $csv .= '"' . date('d-m-Y', strtotime($row['date'])) . '",';
    $csv .= '"' . ($row['pet_name'] ?: '-') . '"';
    $csv .= PHP_EOL;
}
return $csv;
```

This is the conclusion from optimizing several reports, ranging from 10 rows to thousands. All three examples work fine under 1,000 rows, but they start to fail once the data set grows larger.
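The speed claim is easy to check against your own data. Below is a minimal benchmark sketch; the generated `$values` array is a made-up stand-in for `fetchDataFromEternalOblivion` (which is not shown in this answer), so absolute numbers will differ on your setup:

```php
<?php
// Hypothetical stand-in data: 1000 rows shaped like the report above.
$values = [];
for ($i = 0; $i < 1000; $i++) {
    $values[] = ['id' => $i, 'name' => "user$i", 'date' => '2020-01-01', 'pet_name' => ''];
}

// Variant 1: fputcsv into an output buffer.
$t0 = microtime(true);
ob_start();
$fh = fopen('php://output', 'w');
fputcsv($fh, array_keys(reset($values)));
foreach ($values as $row) {
    fputcsv($fh, $row);
}
fclose($fh);
$fputcsvCsv = ob_get_clean();
$fputcsvTime = microtime(true) - $t0;

// Variant 2: plain string concatenation.
$t0 = microtime(true);
$csv = implode(',', array_keys(reset($values))) . PHP_EOL;
foreach ($values as $row) {
    $csv .= '"' . $row['id'] . '","' . $row['name'] . '","'
          . $row['date'] . '","' . ($row['pet_name'] ?: '-') . '"' . PHP_EOL;
}
$concatTime = microtime(true) - $t0;

printf("fputcsv: %.4fs, concat: %.4fs\n", $fputcsvTime, $concatTime);
```

Wrapping each variant in `microtime(true)` like this is crude but enough to see the relative difference grow as the row count goes up.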
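All three variants accumulate the entire document in memory (a string or an output buffer) before returning it, which is the likely reason they break down past a few thousand rows. One common remedy, not part of the original comparison, is to stream each row straight to `php://output` as it is produced, so memory use stays flat regardless of row count. A minimal sketch:

```php
<?php
// Write rows to the output stream one at a time instead of
// building one big string; memory use stays constant.
function streamCsv(iterable $rows): void
{
    $out = fopen('php://output', 'w');
    $headerWritten = false;
    foreach ($rows as $row) {
        if (!$headerWritten) {
            // Header row comes from the keys of the first record.
            fputcsv($out, array_keys($row));
            $headerWritten = true;
        }
        fputcsv($out, $row);
    }
    fclose($out);
}
```

Pair this with a generator that `yield`s rows from the database cursor and the full result set never has to be held in memory either.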