Simba ODBC for BigQuery

Introduction

Google has partnered with Simba to provide ODBC and JDBC drivers that leverage the power of BigQuery's GoogleSQL.

The JDBC and ODBC drivers are intended to help users combine the power of BigQuery with their existing tools and infrastructure. Some BigQuery capabilities, including high-performance storage integration and reservation management, are available only through the BigQuery API. These drivers can be used only with BigQuery, not with any other product or service. You can use the drivers without any additional licensing requirements, but you cannot redistribute them as part of an application.

Current ODBC drivers

ODBC release 3.0.5.1011

ODBC release 2.5.2.1004

We recommend upgrading to the 3.x releases. The 2.5.x releases still receive bug fixes and critical security updates, but new BigQuery features are added only to the 3.x releases.

Note: To comply with Google's deprecation of the out-of-band (OOB) authorization flow, ODBC releases 2.5.2 and later introduce changes that may require you to re-authenticate and/or re-create your connection strings. This is because the user account flow has changed and no longer supports the deprecated out-of-band flow. The connection dialog now retrieves a refresh token automatically, so there is no need to manually copy and paste a confirmation code. The Linux and Mac refresh token scripts have been updated to Python. For details, see the Installation and Configuration Guide.

Previous ODBC releases

Current JDBC driver

JDBC release 1.5.4.1008

Note: To comply with Google's deprecation of the out-of-band (OOB) authorization flow, JDBC release 1.3.2 introduces changes that may require you to re-authenticate and/or re-create your connection strings. This is because the user account flow has changed and no longer supports the deprecated out-of-band flow. For details, see the Installation and Configuration Guide.

Previous JDBC releases

Known issues and FAQ

Can I use these drivers to ingest or export data between BigQuery and my existing environment?

These drivers use BigQuery's query interface; they do not expose BigQuery's large-scale ingestion mechanisms or its export functionality.

While you can issue small INSERT requests using DML, doing so is subject to DML limits.
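
As an illustration, the following is a minimal sketch of issuing such a small DML INSERT through the ODBC driver with pyodbc; the DSN, dataset, and table names are hypothetical, and the statement remains subject to BigQuery's DML quotas and limits.

    import pyodbc

    # Hypothetical DSN name; replace with a data source configured for the
    # Simba Google BigQuery ODBC connector.
    conn = pyodbc.connect("DSN=GoogleBigQuery", autocommit=True)
    cur = conn.cursor()

    # A small DML INSERT; larger or frequent inserts are constrained by DML limits.
    cur.execute(
        "INSERT INTO mydataset.events (event_id, event_name) VALUES (1, 'signup')"
    )
    conn.close()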

How do the drivers handle BigQuery's nested and repeated data schemas?

Nested and repeated data (also known as STRUCTs and ARRAYs in GoogleSQL) is represented as the JSON output of those types from the BigQuery API, because the ODBC data model has no suitable way to represent such data. While you can run queries that manipulate these types, if a query's output schema contains composite types, the drivers present the data encoded as JSON.

Note: You can define logical views to represent such data in a simpler way, for example by flattening repeated values or selecting individual fields from a record. In those cases the values can be used directly, because they are not rendered in JSON notation.
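
For illustration, the following minimal sketch (hypothetical DSN, dataset, and column names) runs a query whose output schema includes an ARRAY of STRUCTs; the driver returns that column as a JSON-encoded string, which the application can decode itself.

    import json
    import pyodbc

    conn = pyodbc.connect("DSN=GoogleBigQuery")  # hypothetical DSN
    cur = conn.cursor()

    # "addresses" is assumed to be an ARRAY<STRUCT<...>> column; because the
    # ODBC data model has no composite types, the driver returns it as JSON text.
    cur.execute("SELECT name, addresses FROM mydataset.customers LIMIT 10")
    for name, addresses_json in cur.fetchall():
        addresses = json.loads(addresses_json)  # decode the JSON representation
        print(name, addresses)
    conn.close()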

Do these drivers support parameterized queries?

Yes, these drivers support positional parameterization. Note that preparing a query before execution provides validation information but does not affect the performance of the executed query.
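
For example, here is a minimal pyodbc sketch that binds positional parameters with the standard ODBC "?" markers; the DSN and table names are hypothetical.

    import pyodbc

    conn = pyodbc.connect("DSN=GoogleBigQuery")  # hypothetical DSN
    cur = conn.cursor()

    # Parameters are bound by position using ODBC "?" markers.
    cur.execute(
        "SELECT position, salary FROM mydataset.Employee"
        " WHERE salary > ? AND position = ?",
        (100000, "Engineer"),
    )
    rows = cur.fetchall()
    conn.close()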

Do the drivers support SQL query prefixes?

Although BigQuery supports query prefixes for switching between the legacy SQL and GoogleSQL dialects, the drivers do not. The drivers maintain state tied to the SQL mode in use, and that option is set explicitly when the connection is created. Because the SQL mode is fixed at connection time, the drivers do not support switching SQL dialects with a query prefix.
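
As an illustration only, the dialect is instead chosen through a connection option when the connection is created; the SQLDialect keyword and its value below are assumptions rather than something documented on this page, so check the Installation and Configuration Guide for the exact option your driver version uses.

    import pyodbc

    # The SQL dialect is fixed for the lifetime of the connection. "SQLDialect=1"
    # (GoogleSQL) is an assumed keyword/value here; verify it against the
    # Installation and Configuration Guide.
    conn = pyodbc.connect("DSN=GoogleBigQuery;SQLDialect=1")
    cur = conn.cursor()
    cur.execute("SELECT CURRENT_DATE()")  # runs in the dialect chosen at connect time
    print(cur.fetchone())
    conn.close()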

How do I get support for these drivers?

See the support options on our support page.

How am I billed when querying BigQuery through the drivers?

The drivers are free to download. Queries run through the drivers are billed according to how the drivers are configured:

  • Query pricing applies by default to all queries issued through the drivers. It is the only pricing that applies when the drivers are not configured to support large result sets.
  • When the drivers are configured to write large result sets to a destination table, storage pricing applies in addition to query pricing. The data is stored for 24 hours, so the table of results incurs 24 hours of storage charges.
  • When the drivers call the Storage API, Storage API pricing applies. This pricing applies to the data read from the query results, not to the data scanned by the query. Storage API pricing applies only to large result sets.

Release Notes

==============================================================================
Magnitude Simba Google BigQuery ODBC Data Connector Release Notes
==============================================================================

The release notes provide details of enhancements, features, known issues, and
workflow changes in Simba Google BigQuery ODBC Connector 3.0.5, as well as the
version history. 


3.0.5 ========================================================================

Released 2024-03-22

Enhancements & New Features

 * [GAUSS-1636] Application Default Credentials support

   You can now authenticate your connection with application default 
   credentials. To do this, select Application Default Credentials from the 
   OAuth Mechanism drop-down list (set the OAuth Mechanism property to 
   Application Default Credentials). For more information, see the 
   Installation and Configuration Guide.

 * [GAUSS-1758][GAUSS-1763][GAUSS-1764] Updated RANGE data type support

   The connector now supports RANGE data types for read operations. For more 
   information, see the Installation and Configuration Guide and Google 
   BigQuery documentation: https://cloud.google.com/bigquery/docs/reference/standard-sql/data-types#range_type.

 * [GAUSS-1768] Project ID support

   The DefaultDataset connection property now supports both single dataset
   names and multi-part names that include project IDs (see the
   connection-string sketch after this list).

 * [GAUSS-1783] Request ID support

   The connector has improved retry logic by setting a request ID in jobs.query.

 * [GAUSS-1784] Updated DllA file locations
    
   The libcurl.dll, LibCurl32.DllA.manifest, and LibCurl64.DllA.manifest files 
   are now located under the lib folder.
 
 * [GAUSS-1792] OS version support

   The connector now displays the OS version in the user agent string.
 
 * Updated third-party libraries

   The connector now uses the following third-party libraries:
   - Avro 1.11.3 (previously 1.11.1)
   - Expat 2.6.0 (previously 2.5.0)
   - libcURL 8.6.0 (previously 8.4.0)
   - OpenSSL 3.0.13 (previously 3.0.12) 
   - Zlib 1.3.1 (previously 1.2.13)
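
The following is a minimal pyodbc sketch of the DefaultDataset behavior
described above; the DSN, project, dataset, and table names are hypothetical.

      import pyodbc

      # DefaultDataset accepts a plain dataset name or a project-qualified name.
      conn = pyodbc.connect("DSN=GoogleBigQuery;DefaultDataset=my_dataset")
      conn2 = pyodbc.connect("DSN=GoogleBigQuery;DefaultDataset=my-project.my_dataset")

      # Unqualified table names resolve against the default dataset.
      cur = conn2.cursor()
      cur.execute("SELECT COUNT(*) FROM my_table")
      print(cur.fetchone())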

 
Resolved Issues
The following issue has been resolved in Simba Google BigQuery ODBC Connector 
3.0.5.

 * [GAUSS-1791] When retrieving data from DATETIME columns, the connector  
   retains milliseconds from the previous row. 


Known Issues
The following are known issues that you may encounter due to limitations in
the data source, the connector, or an application.

 * The connector does not support parameterized types for Resultset and 
   Parameter metadata.

   This is a limitation of the Google BigQuery server. 

 * The connector does not support parameters in the exception block.
 
   This is a limitation of the Google BigQuery server, discovered in March 2021.
   
 * On macOS or Linux platforms, when the connector converts SQL_DOUBLE data to 
   SQL_C_CHAR or SQL_C_WCHAR, values that are small or large enough to require 
   representation in scientific notation may have a 0 prepended to the exponent.

   This is a limitation of Google BigQuery. For a list of BigQuery data types 
   that the connector maps to the SQL_DOUBLE ODBC type, see the Installation 
   and Configuration Guide.

 * When casting data, you must specify the data type according to Google 
   BigQuery standards.

   When casting data to a specific data type, you must use the corresponding 
   data type name shown in the "Casting" section of the Query Reference: 
   https://cloud.google.com/bigquery/sql-reference/functions-and-operators#casting

   For example, to cast the "salary" column to the INTEGER type, you must 
   specify INT64 instead of INTEGER: 

      SELECT position, CAST(salary AS INT64) from Employee

 * When using the Standard SQL dialect, the connector's ODBC escape 
   functionality is subject to the following limitations:
   
   - Standard SQL does not support the seed in the RAND([seed]) scalar
     function. As a result, the connector maps RAND() and RAND(6) to RAND().

   - For the following scalar functions, BigQuery only returns values in UTC,
     but ODBC expects the values in local time:
     - CURDATE()
     - CURRENT_DATE()
     - CURRENT_TIME[(TIME_PRECISION)]
     - CURRENT_TIMESTAMP[(TIME_PRECISION)]
     - CURTIME()
     - NOW()

   - Time precision values are not supported for the 
     CURRENT_TIME[(TIME_PRECISION)] and CURRENT_TIMESTAMP[(TIME_PRECISION)]
     scalar functions.

   - TIME data types are not supported for the following scalar functions:
     - EXTRACT(interval FROM datetime)
     - TIMESTAMPADD(interval,integer_exp,timestamp_exp)
     - TIMESTAMPDIFF(interval,timestamp_exp1,timestamp_exp2)
     For TIMESTAMPADD and TIMESTAMPDIFF, only the TIMESTAMP and DATE data 
     types are supported.

   - When calling the TIMESTAMPADD() scalar function to work with DAY, WEEK, 
     MONTH, QUARTER, or YEAR intervals, the connector escapes the function and 
     calls DATE_ADD() instead. DATE_ADD() only supports DATE types, so time
     information is lost if the function is called on TIMESTAMP data (see the
     sketch after this list).

   - When calling the TIMESTAMPDIFF() scalar function to work with DAY, MONTH, 
     QUARTER, or YEAR intervals, the connector escapes the function and calls 
     DATE_DIFF() instead. DATE_DIFF() only supports DATE types, so time 
     information is lost if the function is called on TIMESTAMP data.

   - For the BIT_LENGTH scalar function, only the STRING and BYTES data types 
     are supported. This behavior aligns with the SQL-92 specification, but 
     not the ODBC specification.

 * When using the Legacy SQL dialect, the connector's ODBC escape 
   functionality is subject to the following limitations:

   - For the following scalar functions, BigQuery only returns values in UTC,
     but ODBC expects the values in local time:
     - CURDATE()
     - CURRENT_DATE()
     - CURRENT_TIME[(TIME_PRECISION)]
     - CURRENT_TIMESTAMP[(TIME_PRECISION)]
     - CURTIME()

   - Time precision values are not supported for the 
     CURRENT_TIME[(TIME_PRECISION)] and CURRENT_TIMESTAMP[(TIME_PRECISION)]
     scalar functions. 

   - For the following scalar functions, TIME data types are not supported.
     Only the TIMESTAMP and DATE data types are supported.
     - TIMESTAMPADD(interval,integer_exp,timestamp_exp)
     - TIMESTAMPDIFF(interval,timestamp_exp1,timestamp_exp2)
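
To illustrate the TIMESTAMPADD limitation above, the following sketch uses the
ODBC escape syntax with a DAY interval; the DSN, dataset, and column names are
hypothetical.

      import pyodbc

      conn = pyodbc.connect("DSN=GoogleBigQuery")  # hypothetical DSN
      cur = conn.cursor()

      # With a DAY interval the connector rewrites this escape call to DATE_ADD(),
      # which only handles DATE values, so the time-of-day part of event_ts is lost.
      cur.execute(
          "SELECT {fn TIMESTAMPADD(SQL_TSI_DAY, 1, event_ts)} FROM mydataset.events"
      )
      print(cur.fetchall())
      conn.close()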


Workflow Changes =============================================================

The following changes may disrupt established workflows for the connector.


3.0.5 ------------------------------------------------------------------------

 * [GAUSS-1784] Removed DllA directories 
    
   Beginning with this release, on Windows, the LibCurl64.DllA and 
   LibCurl32.DllA directories for 64-bit and 32-bit packages have been 
   removed.


3.0.4 ------------------------------------------------------------------------

 * [GAUSS-1767] Updated IgnoreTransactions property

   The documentation for the IgnoreTransactions property has been updated. For more 
   information, see the Installation and Configuration Guide.


3.0.3 ------------------------------------------------------------------------

 * [GAUSS-1338] Updated High-Throughput API section 

   Your authentication requires the devstorage.read_only scope at minimum to 
   use the High-Throughput API. For more information, see the Installation and
   Configuration Guide.


3.0.2 ------------------------------------------------------------------------

 * [GAUSS-1685] Upgraded EnableSession property support

   When using transactions, the EnableSession property is now required.

 * [GAUSS-1706] Updated Installation and Configuration section for Windows

   The Installation and Configuration section has been updated to include 
   crls.pki.goog as a required endpoint to be whitelisted in proxies. For
   more information, see the Installation and Configuration Guide.


3.0.0 ------------------------------------------------------------------------

 * [GAUSS-1593] Removed support for multiple operating systems

   Beginning with this release, the connector no longer supports the following
   operating systems: 
   - Ubuntu 18.04
   - Windows 8.1

   For a list of supported operating systems, see the Installation and 
   Configuration Guide.

 * [GAUSS-1595] Updated HTAPI support

   The following changes have been made to the HTAPI feature:
   - The Minimum Table Size for HTAPI field (HTAPI_MinResultsSize property) 
     has been removed.
   - The Ratio of Results to Rows Per Block field (HTAPI_MinActivationRatio
     property) has been renamed to Activation Threshold for High-Throughput 
     API (HTAPI_ActivationThreshold). The HTAPI_MinActivationRatio property
     has been deprecated and is an alias. 
   - The Enable HTAPI for Large Results Dataset field (EnableHTAPI property)
     has been renamed to Allow High-Throughput API for Large Results queries 
     (AllowHtapiForLargeResults). The EnableHTAPI property has been deprecated
     and is an alias. 
   - In the features section, the High-Throughput API section has been updated
     with new information.

 * [GAUSS-1604] Removed support for macOS universal bitness

   Beginning with this release, the connector no longer supports universal
   bitness for macOS. Support for macOS versions 10.14 (32-bit) and 10.15
   (32-bit) has been removed. For a list of supported macOS versions, see the 
   Installation and Configuration Guide.

 * [GAUSS-1624] Updated Large Result Set support

   The following changes have been made to the Large Result Set feature:
   - In the Advanced options configuration, Temporary Table Expiration Time
     has been renamed to Default temp table expiration time (ms). 
   - In the features section, the Large Result Set Support section has been 
     updated with new information.

 * [GAUSS-1625] Updated ClientId and ClientSecret default values

   The connector now uses package-specific default values for the ClientId and
   ClientSecret properties. It is recommended to use your own Client ID and
   Client Secret. For more information, see the Installation and Configuration
   Guide.

 * [GAUSS-1645] Removed support for P12 keyfiles

   Beginning with this release, the connector no longer supports P12 keyfiles.
   Users relying on service accounts must use JSON keyfiles instead.
   The P12CustomPwd property used for supporting the P12 keyfile has also been
   deprecated.


2.6.0 ------------------------------------------------------------------------

 * [GAUSS-1583] Updated ClientID and ClientSecret properties

   The connector now uses Simba client details as the default values for the 
   ClientId and ClientSecret properties. It is recommended to use your own
   client ID and client secret. For more information, see the 
   Installation and Configuration Guide.

 * [GAUSS-1541] Upgraded Google user account authentication

   The dialog now automatically retrieves a refresh token without a manual
   copy-and-paste of the confirmation code. For more information, see the 
   Installation and Configuration Guide.

 * [GAUSS-1543] Upgraded HTAPI property

   The following changes have been applied to the Advanced options 
   configuration:
   - The Enable HTAPI checkbox has been moved from the High Throughput API 
   options section to the Large Results options section.
   - The connector now uses HTAPI for large result datasets that exceed the
   activation ratio and the minimum query results size for HTAPI. To do this, 
   select the Enable High-Throughput API for Large Result Dataset checkbox 
   (set the EnableHTAPI property to 1).
   - The Minimum Query Results Size for HTAPI and Ratio of Results to Rows Per
   Block properties are now editable at all times. For more information, see 
   the Installation and Configuration Guide.


2.5.2 ------------------------------------------------------------------------

 * [GAUSS-1626] Removed redundant connection properties

   The Auth_Client_ID and Auth_Client_SECRET connection properties have been 
   deprecated.
	
 * [GAUSS-1543] Upgraded HTAPI property

   The following changes have been applied to the Advanced options 
   configuration:
   - The Enable HTAPI checkbox has been moved from the High Throughput API 
   options section to the Large Results options section.
   - The connector now uses HTAPI for large result datasets that exceed the
   activation ratio and the minimum query results size for HTAPI. To do this, 
   select the Enable High-Throughput API for Large Result Dataset checkbox 
   (set the EnableHTAPI property to 1).
   - The Minimum Query Results Size for HTAPI and Ratio of Results to Rows Per
   Block properties are now editable at all times. For more information, see 
   the Installation and Configuration Guide.


2.5.0 ------------------------------------------------------------------------

 * [GAUSS-1508] Updated authentication interface

   On Windows, the Connector DSN Setup dialog box has been updated. For more 
   information, see the Installation and Configuration Guide.


2.4.5 -----------------------------------------------------------------------
 
 * [GAUSS-1434] Updated MaxThreads property

   The default value of the MaxThreads property is now 8. Previously, the 
   default value was 16. For more information, see the Installation and 
   Configuration Guide.


2.3.5 -----------------------------------------------------------------------
 
 * [GAUSS-1246] Removed support for macOS earlier than 10.14

   Beginning with this release, the connector no longer supports macOS
   versions earlier than 10.14. For a list of supported macOS versions, see 
   the Installation and Configuration Guide.


2.2.4 ------------------------------------------------------------------------

 * [GAUSS-980] Removed support for the Visual C++ Redistributable for Visual
   Studio 2013
  
   Beginning with this release, the driver no longer supports this version
   of the dependency, and requires Visual C++ Redistributable for Visual
   Studio 2015 instead.
   

2.2.2 ------------------------------------------------------------------------

 * [GAUSS-875] New service endpoints

   The driver now uses a new set of service endpoints to connect to the 
   Google BigQuery API. The previous service endpoints have been deprecated. 
   For a list of the new endpoints, see the "Service Endpoints" section of 
   the Installation and Configuration Guide. 

 * [GAUSS-897] Precedence for default large result dataset

   If the Use Default _bqodbc_temp_tables Large Results Dataset check box is 
   selected (the UseDefaultLargeResultsDataset property is set to 1) and a 
   dataset is specified in the Dataset Name For Large Result Sets field (the 
   LargeResultsDataSetID property), the driver now uses the default 
   _bqodbc_temp_tables dataset. For more information, see the Installation 
   and Configuration Guide.


2.2.0 ------------------------------------------------------------------------

 * Linux support changes

   Beginning with this release, the Linux version of the driver now requires 
   glibc 2.17 or later to be installed on the target machine.
   
   As a result, the driver no longer supports CentOS 6 or RedHat Enterprise 
   Linux (RHEL) 6. Only CentOS 7, RHEL 7, and SUSE Linux Enterprise Server 
   (SLES) 11 and 12 are supported.


2.1.22 -----------------------------------------------------------------------

 * [GAUSS-653] Updated large result set behavior

   The driver's behavior for handling large result sets with legacy SQL has
   been changed. When the driver sends a query, it checks whether the "Allow
   Large Results" option is enabled and if there is a dataset name specified.
   If the option is enabled, it requests a temporary large result set for
   your data. This data storage has cost implications for your BigQuery
   account; consult the BigQuery service documentation for details.


2.1.14 -----------------------------------------------------------------------

 * Minimum TLS Version

   Beginning with this release, the driver requires a minimum version of TLS 
   for encrypting the data store connection. By default, the driver requires 
   TLS version 1.2. This requirement may cause existing DSNs and connection 
   strings to stop working, if they are used to connect to data stores that 
   use a TLS version earlier than 1.2.

   To resolve this, in your DSN or connection string, set the Minimum TLS 
   option (the Min_TLS property) to the appropriate version of TLS for your 
   server. For more information, see the Installation and Configuration Guide.
   A connection-string sketch follows this list.

 * Large result set handling
 
   If you have a default destination set for large datasets but have not
   enabled the Allow Large Result Sets option (the AllowLargeResults property),
   the driver reports an error.
   
   To resolve this, enable the Allow Large Result Sets option (the 
   AllowLargeResults property).
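
The following is a minimal connection-string sketch combining the two options
above; the DSN name is hypothetical, and the exact value formats for Min_TLS
and AllowLargeResults should be verified against the Installation and
Configuration Guide.

      import pyodbc

      # Min_TLS and AllowLargeResults are the properties named above; the "1.2"
      # and "1" value formats are assumptions.
      conn = pyodbc.connect("DSN=GoogleBigQuery;Min_TLS=1.2;AllowLargeResults=1")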


Version History ==============================================================

3.0.4 ------------------------------------------------------------------------

Released 2023-12-05

Enhancements & New Features

 * [GAUSS-1437] Collation support

   The connector now returns case-insensitive string column information in 
   the resultset metadata. For more information about this feature, see: 
   https://cloud.google.com/bigquery/docs/reference/standard-sql/collation-concepts.

 * [GAUSS-1671] Updated documentation for PSC property

   The documentation for the PSC property has been updated. For more information, 
   see the Installation and Configuration Guide.

 
Resolved Issues
The following issues have been resolved in Simba Google BigQuery ODBC 
Connector 3.0.4.

 * [GAUSS-1725] When the KMS Key is not properly set for all the queries, the 
   connector returns an error.

 * [GAUSS-1755] The connector does not retry for HTTP 502 BAD_GATEWAY errors.
   
 * [GAUSS-1760] When a transaction with a specified location initiates, the 
   connector does not add the location in the request.


3.0.3 ------------------------------------------------------------------------

Released 2023-10-31

Enhancements & New Features

 * [GAUSS-1751][GAUSS-1752] Updated third-party libraries

   The connector now uses the following third-party libraries:
   - LibCurl 8.4.0 (previously 8.1.2)
   - OpenSSL 3.0.11 (previously 3.0.9)
   
 
Resolved Issues
The following issues have been resolved in Simba Google BigQuery ODBC 
Connector 3.0.3.

 * [GAUSS-1591] When releasing a descriptor handle, the connector terminates
   unexpectedly.

 * The connector computes the value returned for SQL_DESC_OCTET_LENGTH from 
   the IRD (and thus also from SQLColAttribute) for character types by using 
   the maximum size of a codepoint instead of the size of a code unit. This 
   applies to both wide and non-wide types; previously, the column size was 
   used directly for non-wide types because the code unit size was assumed 
   to be 1.

 * When inserting a maximum double-value in the number converter, the 
   connector returns an error.


3.0.2 ------------------------------------------------------------------------

Released 2023-07-20

Enhancements & New Features

 * [GAUSS-1412] Updated macOS support
   
   On macOS, the connector is now a Universal driver that natively supports 
   Apple Silicon. For security best practices, it is suggested to keep both
   the connector and OS updated. 

 * [GAUSS-1676] Support for SessionLocation property

   The connector can now create a session with the first query in your desired  
   location. To do this, select the Session Location field (set the 
   SessionLocation property to the desired location). For more information, 
   see the Installation and Configuration Guide; a connection-string sketch 
   follows this list.
 
 * [GAUSS-1662] Upgraded Windows compiler

   The connector now uses Visual Studio 2022. Previously, the connector used  
   Visual Studio 2015. For a list of supported Visual Studio versions, see the
   Installation and Configuration Guide.

 * [GAUSS-1661][GAUSS-1663][GAUSS-1686][GAUSS-1695] Updated third-party 
   libraries

   The connector now uses the following third-party libraries:
   - Boost 1.82.0 (previously 1.66.0)
   - libcURL 8.1.2 (previously 7.88.1)
   - OpenSSL 3.0.9 (previously 3.0.8) 
   - Grpc 1.46.7 (previously 1.37.1)
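
The following is a minimal connection-string sketch for the SessionLocation
property described above; the DSN name and the "US" location value are
hypothetical.

      import pyodbc

      # SessionLocation pins the session created by the first query to a
      # location; "US" is just an example value.
      conn = pyodbc.connect("DSN=GoogleBigQuery;SessionLocation=US")
      cur = conn.cursor()
      cur.execute("SELECT 1")  # the session is created with this first query
      print(cur.fetchone())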
    

Resolved Issues
The following issues have been resolved in Simba Google BigQuery ODBC 
Connector 3.0.2.

 * [GAUSS-1680] The connector ignores the HTAPI_ActivationThreshold property.

 * [GAUSS-1708] When LargeResultsDatasetId is not specified, the connector
   ignores the given CMEK key. For more information on KMSKeyName, see the
   Installation and Configuration Guide.


3.0.0 ------------------------------------------------------------------------

Released 2023-04-14

This release contains workflow changes as well as support deprecations and 
removals; please review the Workflow Changes section below.

Enhancements & New Features

 * [GAUSS-1641][GAUSS-1647] Updated Refresh Token field behavior

   In the DSN configuration dialog, the Refresh Token field is now hidden with
   bullets. To generate refresh tokens that you can copy or paste into 
   connection strings, use the get_refresh_token.py script located under the
   Tools directory. The DSN configuration dialog still generates a valid DSN 
   in the Windows registry. For more information, see the Installation and 
   Configuration Guide.

 * [GAUSS-1593] Support for multiple operating systems

   The connector now supports the following operating systems:
   - RedHat Enterprise Linux (RHEL) 9
   - Ubuntu 22.04

   For a list of supported operating systems, see the Installation and 
   Configuration Guide.

 * [GAUSS-1654][GAUSS-1588] Updated third-party libraries

   The connector now uses the following third-party libraries:
   - Expat 2.5.0 (previously 2.4.6)   
   - ICU 71.1 (previously 58.3)
   - LibCurl 7.88.1 (previously 7.84.0)
   - OpenSSL 3.0.8 (previously 1.1.1s) 
   - Zlib 1.2.13 (previously 1.2.11) 


Resolved Issues
The following issues have been resolved in Simba Google BigQuery ODBC 
Connector 3.0.0.

 * [GAUSS-1609] The connector displays an incorrect error message for 
   decryption.

 * [GAUSS-1644] For catalog functions other than SQLStatistics, the connector 
   incorrectly uses SEQ_IN_INDEX for the ordinal position column.

 
