Date: Tue, 09 Dec 2025 11:23:28 +0000
From: Po-Chuan Hsieh <sunpoet@FreeBSD.org>
To: ports-committers@FreeBSD.org, dev-commits-ports-all@FreeBSD.org, dev-commits-ports-main@FreeBSD.org
Subject: git: 8ad452903cff - main - databases/py-databricks-sql-connector: Add py-databricks-sql-connector 4.2.2
Message-ID: <693806b0.3ff3b.2ff16ab8@gitrepo.freebsd.org>
The branch main has been updated by sunpoet:

URL: https://cgit.FreeBSD.org/ports/commit/?id=8ad452903cffdb8808383cbf2b1a4bea5530fd07

commit 8ad452903cffdb8808383cbf2b1a4bea5530fd07
Author:     Po-Chuan Hsieh <sunpoet@FreeBSD.org>
AuthorDate: 2025-12-09 06:37:09 +0000
Commit:     Po-Chuan Hsieh <sunpoet@FreeBSD.org>
CommitDate: 2025-12-09 11:20:48 +0000

    databases/py-databricks-sql-connector: Add py-databricks-sql-connector 4.2.2

    The Databricks SQL Connector for Python allows you to develop Python
    applications that connect to Databricks clusters and SQL warehouses. It is
    a Thrift-based client with no dependencies on ODBC or JDBC. It conforms
    to the Python DB API 2.0 specification.

    This connector uses Arrow as the data-exchange format and supports APIs
    (e.g. fetchmany_arrow) to fetch Arrow tables directly. Arrow tables are
    wrapped in the ArrowQueue class to provide a natural API for getting
    several rows at a time. PyArrow is required to use these APIs; you can
    install it via "pip install pyarrow" or
    "pip install databricks-sql-connector[pyarrow]".

    The connector includes built-in support for HTTP/HTTPS proxy servers with
    multiple authentication methods, including basic authentication and
    Kerberos/Negotiate authentication. See docs/proxy.md and
    examples/proxy_authentication.py for details.
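[Editorial aside: since the connector conforms to the Python DB API 2.0 specification (PEP 249), the familiar connect/cursor/execute/fetch pattern applies. A minimal sketch of that pattern follows, using the stdlib sqlite3 module as a stand-in driver so the snippet runs anywhere; with this port you would instead obtain the connection via databricks.sql.connect(server_hostname=..., http_path=..., access_token=...), whose argument values are deployment-specific.]

```python
# Sketch of the DB API 2.0 (PEP 249) pattern the connector conforms to.
# sqlite3 stands in for the Databricks driver purely for illustration.
import sqlite3

conn = sqlite3.connect(":memory:")
cur = conn.cursor()
cur.execute("CREATE TABLE t (x INTEGER)")
cur.executemany("INSERT INTO t VALUES (?)", [(1,), (2,), (3,)])

cur.execute("SELECT x FROM t ORDER BY x")
rows = cur.fetchmany(2)   # batched fetch; fetchmany_arrow is the Arrow analogue
print(rows)               # [(1,), (2,)]
conn.close()
```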
---
 databases/Makefile                              |  1 +
 databases/py-databricks-sql-connector/Makefile  | 41 ++++++++++++++++++++++
 databases/py-databricks-sql-connector/distinfo  |  3 ++
 .../files/patch-pyproject.toml                  | 23 ++++++++++++
 databases/py-databricks-sql-connector/pkg-descr | 15 ++++++++
 5 files changed, 83 insertions(+)

diff --git a/databases/Makefile b/databases/Makefile
index 0f3fa1c7271b..aa8ccbcd1a1f 100644
--- a/databases/Makefile
+++ b/databases/Makefile
@@ -767,6 +767,7 @@
 	SUBDIR += py-couchdb
 	SUBDIR += py-cymysql
 	SUBDIR += py-databases
+	SUBDIR += py-databricks-sql-connector
 	SUBDIR += py-datafusion
 	SUBDIR += py-dbf
 	SUBDIR += py-dbt-core
diff --git a/databases/py-databricks-sql-connector/Makefile b/databases/py-databricks-sql-connector/Makefile
new file mode 100644
index 000000000000..ce42b64862bc
--- /dev/null
+++ b/databases/py-databricks-sql-connector/Makefile
@@ -0,0 +1,41 @@
+PORTNAME=	databricks-sql-connector
+PORTVERSION=	4.2.2
+CATEGORIES=	databases python
+MASTER_SITES=	PYPI
+PKGNAMEPREFIX=	${PYTHON_PKGNAMEPREFIX}
+DISTNAME=	databricks_sql_connector-${PORTVERSION}
+
+MAINTAINER=	sunpoet@FreeBSD.org
+COMMENT=	Databricks SQL Connector for Python
+WWW=		https://github.com/databricks/databricks-sql-python
+
+LICENSE=	APACHE20
+LICENSE_FILE=	${WRKSRC}/LICENSE
+
+BUILD_DEPENDS=	${PYTHON_PKGNAMEPREFIX}poetry-core>=1.0.0:devel/py-poetry-core@${PY_FLAVOR}
+RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}lz4>=4.0.2<5:archivers/py-lz4@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}oauthlib>=3.1.0<4:security/py-oauthlib@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}openpyxl>=3.0.10<4:textproc/py-openpyxl@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pandas>=2.2.3,1:math/py-pandas@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pybreaker>=1.0.0<2:devel/py-pybreaker@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}pyjwt>=2.0.0<3:www/py-pyjwt@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}python-dateutil>=2.8.0<3:devel/py-python-dateutil@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}requests>=2.18.1<3:www/py-requests@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}requests-kerberos>=0.15.0<0.16:www/py-requests-kerberos@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}thrift>=0.16.0,1:devel/py-thrift@${PY_FLAVOR} \
+		${PYTHON_PKGNAMEPREFIX}urllib3>=1.26,1:net/py-urllib3@${PY_FLAVOR}
+
+USES=		python shebangfix
+USE_PYTHON=	autoplist concurrent pep517
+
+NO_ARCH=	yes
+
+SHEBANG_FILES=	src/databricks/sql/thrift_api/TCLIService/TCLIService-remote
+
+OPTIONS_DEFINE=	PYARROW
+OPTIONS_DEFAULT=PYARROW
+PYARROW_DESC=	Apache Arrow support
+
+PYARROW_RUN_DEPENDS=	${PYTHON_PKGNAMEPREFIX}pyarrow>=18.0.0:databases/py-pyarrow@${PY_FLAVOR}
+
+.include <bsd.port.mk>
diff --git a/databases/py-databricks-sql-connector/distinfo b/databases/py-databricks-sql-connector/distinfo
new file mode 100644
index 000000000000..67475348d141
--- /dev/null
+++ b/databases/py-databricks-sql-connector/distinfo
@@ -0,0 +1,3 @@
+TIMESTAMP = 1765032620
+SHA256 (databricks_sql_connector-4.2.2.tar.gz) = 409632a6adb93e849ecd5faaa2e98192e307c984e85cc7c696fb8e953acb7403
+SIZE (databricks_sql_connector-4.2.2.tar.gz) = 186206
diff --git a/databases/py-databricks-sql-connector/files/patch-pyproject.toml b/databases/py-databricks-sql-connector/files/patch-pyproject.toml
new file mode 100644
index 000000000000..280850edd011
--- /dev/null
+++ b/databases/py-databricks-sql-connector/files/patch-pyproject.toml
@@ -0,0 +1,23 @@
+--- pyproject.toml.orig	1970-01-01 00:00:00 UTC
++++ pyproject.toml
+@@ -10,10 +10,10 @@ python = "^3.8.0"
+ 
+ [tool.poetry.dependencies]
+ python = "^3.8.0"
+-thrift = ">=0.16.0,<0.21.0"
++thrift = ">=0.16.0"
+ pandas = [
+-    { version = ">=1.2.5,<2.3.0", python = ">=3.8,<3.13" },
+-    { version = ">=2.2.3,<2.3.0", python = ">=3.13" }
++    { version = ">=1.2.5", python = ">=3.8,<3.13" },
++    { version = ">=2.2.3", python = ">=3.13" }
+ ]
+ lz4 = "^4.0.2"
+ requests = "^2.18.1"
+@@ -88,4 +88,4 @@ skip_covered = false
+ skip_covered = false
+ 
+ [tool.coverage.xml]
+-output = "coverage.xml"
+\ No newline at end of file
++output = "coverage.xml"
diff --git a/databases/py-databricks-sql-connector/pkg-descr b/databases/py-databricks-sql-connector/pkg-descr
new file mode 100644
index 000000000000..d8ce2f20df94
--- /dev/null
+++ b/databases/py-databricks-sql-connector/pkg-descr
@@ -0,0 +1,15 @@
+The Databricks SQL Connector for Python allows you to develop Python
+applications that connect to Databricks clusters and SQL warehouses. It is a
+Thrift-based client with no dependencies on ODBC or JDBC. It conforms to the
+Python DB API 2.0 specification.
+
+This connector uses Arrow as the data-exchange format, and supports APIs (e.g.
+fetchmany_arrow) to directly fetch Arrow tables. Arrow tables are wrapped in the
+ArrowQueue class to provide a natural API to get several rows at a time. PyArrow
+is required to enable this and use these APIs, you can install it via pip
+install pyarrow or pip install databricks-sql-connector[pyarrow].
+
+The connector includes built-in support for HTTP/HTTPS proxy servers with
+multiple authentication methods including basic authentication and
+Kerberos/Negotiate authentication. See docs/proxy.md and
+examples/proxy_authentication.py for details.
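[Editorial aside: the pkg-descr above mentions that Arrow tables are wrapped in the ArrowQueue class so that several rows can be fetched at a time. The sketch below illustrates that batching idea with a hypothetical RowQueue over a plain list; the name, method, and implementation are illustrative only and are not the library's actual class, which wraps pyarrow Tables.]

```python
# Illustrative sketch of the batching idea behind ArrowQueue. This
# hypothetical RowQueue wraps a plain Python list so the "get several
# rows at a time" access pattern can be shown without PyArrow installed.
class RowQueue:
    def __init__(self, rows):
        self._rows = list(rows)
        self._pos = 0

    def next_n_rows(self, n):
        # Return up to n rows and advance the internal cursor.
        batch = self._rows[self._pos:self._pos + n]
        self._pos += len(batch)
        return batch

q = RowQueue([(1, "a"), (2, "b"), (3, "c")])
first = q.next_n_rows(2)   # [(1, 'a'), (2, 'b')]
rest = q.next_n_rows(2)    # [(3, 'c')] - queue drained
```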
