Releases: databricks/databricks-sql-nodejs
1.9.0
What's Changed
- Fix the type check in polyfills.ts by @kravets-levko in #254
- Allow any number type by @kravets-levko in #255
- Support iterable interface for IOperation by @kravets-levko in #252
- Support streaming query results via Node.js streams by @kravets-levko in #262 (see the sketch after this list)
- Add custom auth headers into cloud fetch request by @jackyhu-db in #267
- Support OAuth on databricks.azure.cn by @jackyhu-db in #271
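For illustration, a hedged sketch of the new result-consumption styles. The `iterateRows()` method name is inferred from the PR titles above and may differ from the released API; the query is a placeholder:
```js
const { Readable } = require('node:stream');

// obtain session object as usual
const operation = await session.executeStatement('SELECT * FROM range(1000)'); // placeholder query

// Assumption: IOperation exposes an async-iterable row interface (#252)
for await (const row of operation.iterateRows()) {
  console.log(row);
}

// Alternatively, an async iterable can be bridged into a standard Node.js stream (#262)
// const rowStream = Readable.from(operation.iterateRows());
```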
Full Changelog: 1.8.4...1.9.0
1.8.4
- Fix: proxy agent unintentionally overwrites protocol in URL (#241)
- Improve `Array.at` / `TypedArray.at` polyfill (#242 by @barelyhuman)
- UC Volume ingestion: stream files instead of loading them into memory (#247)
- UC Volume ingestion: improve behavior on SQL `REMOVE` (#249) (see the sketch after this list)
- Expose session and query ID (#250)
- Make `lz4` module optional so the package manager can skip it when it cannot be installed (#246)
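For context, a hedged sketch of the UC Volume ingestion flow these changes affect. The SQL staging syntax and the `stagingAllowedLocalPath` option are assumptions based on the connector's documented ingestion commands, and all paths are placeholders:
```js
// obtain session object as usual; all paths below are placeholders
await session.executeStatement(
  `PUT '/tmp/local-file.csv' INTO '/Volumes/main/default/my_volume/file.csv' OVERWRITE`,
  { stagingAllowedLocalPath: ['/tmp'] }, // assumption: allow-list of local staging directories
);

// SQL REMOVE behavior was improved in #249
await session.executeStatement(`REMOVE '/Volumes/main/default/my_volume/file.csv'`, {
  stagingAllowedLocalPath: ['/tmp'], // assumption: required for all staging operations
});
```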
Full diff: 1.8.3...1.8.4
1.8.3
Full diff: 1.8.2...1.8.3
1.8.2
- Improved results handling when running queries against older DBR versions (#232)
Full diff: 1.8.1...1.8.2
1.8.1
This is a security release which addresses issues with library dependencies:
https://github.com/databricks/databricks-sql-nodejs/security/dependabot/34
An issue in all published versions of the NPM package `ip` allows an attacker to execute arbitrary code and obtain sensitive information via the `isPublic()` function. This can lead to potential Server-Side Request Forgery (SSRF) attacks. The core issue is the function's failure to accurately distinguish between public and private IP addresses.
1.8.0
- Retry failed CloudFetch requests (#211)
- Fixed compatibility issues with Node@14 (#219)
- Support Databricks OAuth on Azure (#223) @jackyhu-db
- Support Databricks OAuth on GCP (#224)
- Support LZ4 compression for Arrow and CloudFetch results (#216)
- Fix OAuth M2M flow on Azure (#228)
Full diff: 1.7.1...1.8.0
OAuth on Azure
Some Azure instances now support the Databricks native OAuth flow (in addition to AAD OAuth). For backward compatibility, the library will continue using the AAD OAuth flow by default. To use Databricks native OAuth, pass `useDatabricksOAuthInAzure: true` to `client.connect()`:
```js
client.connect({
  // other options - host, port, etc.
  authType: 'databricks-oauth',
  useDatabricksOAuthInAzure: true,
  // other OAuth options if needed
});
```
Also, we fixed an issue with AAD OAuth where wrong scopes were passed for the M2M flow.
OAuth on GCP
We enabled OAuth support on GCP instances. Since it uses Databricks native OAuth,
all the options are the same as for OAuth on AWS instances.
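For illustration, a minimal connect sketch for a GCP workspace, mirroring the Azure example above; the host and path values are placeholders:
```js
client.connect({
  host: '1234567890123456.7.gcp.databricks.com', // placeholder GCP workspace host
  path: '/sql/1.0/warehouses/abcdef1234567890', // placeholder HTTP path
  authType: 'databricks-oauth',
  // other OAuth options if needed
});
```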
CloudFetch improvements
The library will now automatically attempt to retry failed CloudFetch requests. Currently, the retry strategy is quite basic, but it is going to be improved in the future.
Also, we implemented support for LZ4-compressed results (both Arrow- and CloudFetch-based). It is enabled by default, and compression will be used if the server supports it.
1.7.1
This release contains a fix for the "Premature close" error which happened due to a socket limit when using the library intensively (#217)
Full diff: 1.7.0...1.7.1
1.7.0
Highlights
- Fixed behavior of the `maxRows` option of `IOperation.fetchChunk()`. Now it will return chunks of the requested size (#200) (see the sketch after this list)
- Improved CloudFetch memory usage and overall performance (#204, #207, #209)
- Remove protocol version check when using query parameters (#213)
- Fix `IOperation.hasMoreRows()` behavior to avoid fetching data beyond the end of the dataset. Also, now it will work properly prior to fetching the first chunk (#205)
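For illustration, a minimal sketch of a fetch loop that exercises both of the affected methods; the query text is a placeholder:
```js
// obtain session object as usual
const operation = await session.executeStatement('SELECT * FROM my_table'); // placeholder query

do {
  // chunks now honor the requested size
  const chunk = await operation.fetchChunk({ maxRows: 10000 });
  console.log(`fetched ${chunk.length} rows`);
} while (await operation.hasMoreRows()); // no longer fetches beyond the end of the dataset

await operation.close();
```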
Full diff: 1.6.1...1.7.0
Query parameters support
In this release we also finally enable support for both named and ordinal query parameters. Usage examples:
```js
// obtain session object as usual

// Using named parameters
const operation1 = await session.executeStatement('SELECT :p1 AS "str_param", :p2 AS "number_param"', {
  namedParameters: {
    p1: 'Hello, World',
    p2: 3.14,
  },
});

// Using ordinal parameters
const operation2 = await session.executeStatement('SELECT ? AS "str_param", ? AS "number_param"', {
  ordinalParameters: ['Hello, World', 3.14],
});
```
Please note that either named or ordinal parameters can be used in a single query, but not both simultaneously.
CloudFetch performance improvements
This release includes various improvements to the CloudFetch feature. It remains disabled by default, but we strongly encourage you to start using it:
```js
// obtain session object as usual

// Enable CloudFetch for a statement
const operation = await session.executeStatement('...', {
  useCloudFetch: true,
});
```
1.6.1
- Make default logger singleton (#199)
- Enable `canUseMultipleCatalogs` option when creating session (#203)
Full diff: 1.6.0...1.6.1
1.6.0
Highlights
- Added proxy support (#193)
- Added support for inferring NULL values passed as query parameters (#189) (see the sketch below)
- Fixed bug with NULL handling for Arrow results (#195)
Full diff: 1.5.0...1.6.0
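For illustration, a minimal sketch of the NULL inference using the `namedParameters` option shown in the 1.7.0 notes above; the query is a placeholder:
```js
// obtain session object as usual
const operation = await session.executeStatement('SELECT :p1 AS "nullable_param"', {
  namedParameters: {
    p1: null, // now inferred and sent as SQL NULL
  },
});
```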
Proxy support
This feature allows all requests the library makes to be passed through a proxy. By default, the proxy is disabled. To enable it, pass a configuration object to `DBSQLClient.connect`:
```js
client.connect({
  // pass host, path, auth options as usual
  proxy: {
    protocol: 'http', // supported protocols: 'http', 'https', 'socks', 'socks4', 'socks4a', 'socks5', 'socks5h'
    host: 'localhost', // proxy host (string)
    port: 8070, // proxy port (number)
    auth: {
      // optional proxy basic auth config
      username: '...', // placeholder
      password: '...', // placeholder
    },
  },
});
```
Note: using proxy settings from environment variables is currently not supported.