Fast JDBC access in Python using pyarrow.jvm

While most databases are accessible via ODBC, where turbodbc gives us an efficient way to turn results into a pandas.DataFrame, there are nowadays a lot of databases that either ship solely with a JDBC driver or whose non-JDBC drivers are not part of the free or open-source offering. To access these databases, you can use JayDeBeApi, which uses JPype to call the JDBC driver. JPype starts a JVM inside the Python process and exposes the Java APIs as plain Python objects. While this is really convenient to use, the Java-Python bridge sadly comes at a high serialisation cost.

One of the main goals of Apache Arrow is to remove the serialisation cost of tabular data between different languages. A typical example where this is already used successfully is the Scala-Python bridge in PySpark. Here the communication between the JVM and Python is done via Py4J, a bridge between the Python and JVM processes. As there are multiple processes involved, the serialisation cost is reduced, but communication and data copies between the two ecosystems still exist.

In the following, we want to present an alternative approach to retrieve data via JDBC where the overhead between the JVM and pandas is kept as minimal as possible. This includes retrieving the whole data on the JVM side, transforming it to an Arrow Record Batch and then passing the memory pointer to that Record Batch over to Python. The important detail here is that we only pass a pointer to the data to Python, not the data itself.

Benchmark setup

In this benchmark, we will use Apache Drill as the database, accessed via its official JDBC driver. For the data, we will use the January 2017 Yellow Cab New York City trip data converted to Parquet. We start Drill in its embedded mode using ./bin/drill-embedded. There we can already peek into the data using

SELECT * FROM dfs.`/…/data/yellow_tripdata_2016-01.parquet` LIMIT 1

As the main aspect here is to show how to access databases using JDBC in Python, we will use JayDeBeApi now to connect to this running Drill instance. To do so, we start a JVM with jpype and then connect to the database using jaydebeapi and the drill-jdbc-all-1.16.0.jar JAR. For JDBC connections, it is important that we have either a classpath with all Java dependencies or, as in this case, a JAR that already bundles all dependencies. Finally, we execute the query and use the result to construct a pandas.DataFrame.

import os

import jaydebeapi
import jpype
import pandas as pd

classpath = os.path.join(os.getcwd(), "apache-drill-1.16.0/jars/jdbc-driver/drill-jdbc-all-1.16.0.jar")
jpype.startJVM(jpype.getDefaultJVMPath(), f"-Djava.class.path={classpath}")
conn = jaydebeapi.connect('org.apache.drill.jdbc.Driver', 'jdbc:drill:drillbit=127.0.0.1')
cursor = conn.cursor()

query = """
    SELECT * 
    FROM dfs.`/…/data/yellow_tripdata_2016-01.parquet`
    LIMIT 1
"""

cursor.execute(query)
columns = [c[0] for c in cursor.description]
data = cursor.fetchall()
df = pd.DataFrame(data, columns=columns)

To measure the performance, we initially tried to run the full query to measure the retrieval performance, but as this didn't finish after 10 min, we reverted to running the SELECT query with different LIMIT sizes. This led to the following response times on my laptop (mean ± std. dev. of 7 runs):

LIMIT n    Time
10000      7.11 s ± 58.6 ms
100000     1min 9s ± 1.07 s
1000000    11min 31s ± 4.76 s
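The timings above can be reproduced with a small harness around timeit (a sketch: `fetch` is a stand-in for the actual cursor.execute/fetchall call, replaced here by a trivial function so the snippet runs standalone):

```python
import statistics
import timeit


def fetch():
    # Stand-in for: cursor.execute(query); cursor.fetchall()
    return [tuple(range(19)) for _ in range(1000)]


# 7 runs of one execution each, matching "mean ± std. dev. of 7 runs".
runs = timeit.repeat(fetch, repeat=7, number=1)
mean = statistics.mean(runs)
stdev = statistics.stdev(runs)
print(f"{mean:.4f} s ± {stdev:.4f} s")
```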

Out of curiosity, we have retrieved the full result set once and this came down to an overall time of 2h 42min 59s on a warm JVM.

pyarrow.jvm and a combined JAR

As the above times were quite frustrating, we have high hopes that using Apache Arrow could bring a decent speedup for this operation. To use Apache Arrow Java and the Drill JDBC driver together, we need to bundle both on the JVM classpath. The simplest way to do this is to generate a new JAR that includes all dependencies using a build tool like Apache Maven. With the following pom.xml you get a fat JAR via mvn assembly:single. It is important here that your Apache Arrow Java version matches the pyarrow version; in this case, both are at 0.15.1. It might still work when they differ, but as there is limited API stability between the two implementations, this could otherwise lead to crashes.

<project xmlns="http://maven.apache.org/POM/4.0.0" xmlns:xsi="http://www.w3.org/2001/XMLSchema-instance"
         xsi:schemaLocation="http://maven.apache.org/POM/4.0.0 http://maven.apache.org/xsd/maven-4.0.0.xsd">
    <modelVersion>4.0.0</modelVersion>

    <groupId>com.uwekorn</groupId>
    <artifactId>drill-odbc</artifactId>
    <version>0.1-SNAPSHOT</version>
    <packaging>jar</packaging>

    <name>drill-odbc</name>

    <properties>
        <project.build.sourceEncoding>UTF-8</project.build.sourceEncoding>
    </properties>

    <dependencies>
        <dependency>
            <groupId>org.apache.arrow</groupId>
            <artifactId>arrow-jdbc</artifactId>
            <version>0.15.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.arrow</groupId>
            <artifactId>arrow-memory</artifactId>
            <version>0.15.1</version>
        </dependency>
        <dependency>
            <groupId>org.apache.drill.exec</groupId>
            <artifactId>drill-jdbc-all</artifactId>
            <version>1.16.0</version>
        </dependency>

    </dependencies>
    
     <build>
      <plugins>
        <plugin>
          <artifactId>maven-assembly-plugin</artifactId>
          <configuration>
            <archive>
              <manifest>
                <mainClass>com.uwekorn.Main</mainClass>
              </manifest>
            </archive>
            <descriptorRefs>
              <descriptorRef>jar-with-dependencies</descriptorRef>
            </descriptorRefs>
          </configuration>
        </plugin>
          <plugin>
              <groupId>org.apache.maven.plugins</groupId>
              <artifactId>maven-compiler-plugin</artifactId>
              <version>3.8.0</version>
              <configuration>
                  <source>11</source>
                  <target>11</target>
              </configuration>
          </plugin>
      </plugins>
    </build>
</project>

After the JAR has been built, we now want to start the JVM with it loaded. Sadly, jpype has the limitation that you need to restart your Python process when you want to restart the JVM with different parameters. We thus adjust the JVM startup command to:

classpath = os.path.join(os.getcwd(), "all-jar/target/drill-odbc-0.1-SNAPSHOT-jar-with-dependencies.jar")
jpype.startJVM(jpype.getDefaultJVMPath(), f"-Djava.class.path={classpath}")

To use Apache Arrow Java to retrieve the result, we need to instantiate a RootAllocator that is used in Arrow Java to allocate the off-heap memory, and get hold of java.sql.DriverManager to connect to the database.

import sys

ra = jpype.JPackage("org").apache.arrow.memory.RootAllocator(sys.maxsize)
dm = jpype.JPackage("java").sql.DriverManager
connection = dm.getConnection("jdbc:drill:drillbit=127.0.0.1")

Once this is set up, we can use the Java method sqlToArrow to query a database using JDBC, retrieve the result and convert it to an Arrow RecordBatch on the Java side. With the helper pyarrow.jvm.record_batch we can take the jpype reference to the Java object, extract the memory address of the RecordBatch and create a matching Python pyarrow.RecordBatch object that points to the same memory.

batch = jpype.JPackage("org").apache.arrow.adapter.jdbc.JdbcToArrow.sqlToArrow(
    connection,
    query,
    ra
)

import pyarrow.jvm

df = pyarrow.jvm.record_batch(batch).to_pandas()

Using these commands, we can now execute the same queries again and compare them to the jaydebeapi times:

LIMIT n    Time (JayDeBeApi)     Time (pyarrow.jvm)    Speedup
10000      7.11 s ± 58.6 ms      165 ms ± 5.86 ms      43x
100000     1min 9s ± 1.07 s      538 ms ± 29.6 ms      128x
1000000    11min 31s ± 4.76 s    5.05 s ± 596 ms       136x

With the pyarrow.jvm approach, we now get times similar to turbodbc.fetchallarrow() on other databases that come with an open ODBC driver. This also brings the retrieval of the whole result set down to a more sane 50.2 s instead of the hours-long wait with jaydebeapi.

Conclusion

By moving the row-to-columnar conversion to the JVM and avoiding the creation of intermediate Python objects before constructing a pandas.DataFrame, we can speed up the retrieval times for JDBC drivers in Python by over 100x. As a user, you need to change your calls from jaydebeapi to the Apache Arrow Java API and pyarrow.jvm. Additionally, you have to take care that Apache Arrow Java and the JDBC driver are on the Java classpath. By using a common Java build tool, this can be achieved by simply declaring them as dependencies of a dummy package.
