What is the proper and fastest way to read Cassandra data into pandas? Currently I use the following code, but it's very slow...
import numpy as np
import pandas as pd
from cassandra.cluster import Cluster
from cassandra.auth import PlainTextAuthProvider
from cassandra.query import dict_factory

auth_provider = PlainTextAuthProvider(username=CASSANDRA_USER, password=CASSANDRA_PASS)
cluster = Cluster(contact_points=[CASSANDRA_HOST], port=CASSANDRA_PORT,
                  auth_provider=auth_provider)
session = cluster.connect(CASSANDRA_DB)
session.row_factory = dict_factory

sql_query = "SELECT * FROM {}.{};".format(CASSANDRA_DB, CASSANDRA_TABLE)

df = pd.DataFrame()
for row in session.execute(sql_query):
    # appends one row at a time, copying the whole DataFrame each iteration
    df = df.append(pd.DataFrame(row, index=[0]))
df = df.reset_index(drop=True).fillna(np.nan)
Reading 1,000 rows takes a minute, and I have a "bit more" than that...
If I run the same query in, e.g., DBeaver, I get the whole result (~40k rows) within a minute.
Thank you!!!
Solution
I got the answer on the official mailing list (it works perfectly):
Hi,
try to define your own pandas row factory:
def pandas_factory(colnames, rows):
    return pd.DataFrame(rows, columns=colnames)

session.row_factory = pandas_factory
session.default_fetch_size = None  # disable paging so all rows arrive in one result set

query = "SELECT ..."
rslt = session.execute(query, timeout=None)
df = rslt._current_rows  # the DataFrame built by the factory for the single page
That's the way I do it, and it should be faster...
If you find a faster method, I'm interested in it :)
Michael
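For reference, the factory can be exercised without a live cluster, since the driver simply calls it with column names and row tuples. The sketch below also shows a paged variant for when you keep `default_fetch_size` enabled instead of setting it to `None`: it relies on the driver's documented `ResultSet` attributes `has_more_pages` and `fetch_next_page()`, plus the same private `_current_rows` attribute as above. The `FakeResultSet` class and sample data are made up for illustration only:

```python
import pandas as pd

def pandas_factory(colnames, rows):
    # The driver calls the row factory with the column names and the
    # row tuples of each fetched page.
    return pd.DataFrame(rows, columns=colnames)

def fetch_all_pages(result):
    # With paging enabled, each page yields its own DataFrame in
    # _current_rows; collect every page and concatenate once at the end.
    pages = [result._current_rows]
    while result.has_more_pages:
        result.fetch_next_page()
        pages.append(result._current_rows)
    return pd.concat(pages, ignore_index=True)

# --- illustration only: a stand-in for cassandra.cluster.ResultSet ---
class FakeResultSet:
    def __init__(self, colnames, row_pages):
        self._pages = [pandas_factory(colnames, rows) for rows in row_pages]
        self._i = 0

    @property
    def _current_rows(self):
        return self._pages[self._i]

    @property
    def has_more_pages(self):
        return self._i + 1 < len(self._pages)

    def fetch_next_page(self):
        self._i += 1

rslt = FakeResultSet(["id", "name"], [[(1, "a"), (2, "b")], [(3, "c")]])
df = fetch_all_pages(rslt)
print(df.shape)  # (3, 2)
```

Against a real cluster you would call `fetch_all_pages(session.execute(query))` instead of building a fake result set. Note that `_current_rows` is a private attribute and may change between driver versions.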