JDBC

Reads data from and writes data to other databases using JDBC.

Source

Source Parameters

| Parameter | Description | Required | Default |
| --- | --- | --- | --- |
| Credential Type | Credential provider type (Databricks Secrets or Username/Password). Credentials can be supplied via config options or Databricks secrets so that they are not visible in code. | True | (none) |
| Credential Scope | Scope to use for Databricks secrets. | False | (none) |
| url | The JDBC URL of the form `jdbc:subprotocol:subname` to connect to. Source-specific connection properties may be specified in the URL, e.g. `jdbc:postgresql://test.us-east-1.rds.amazonaws.com:5432/postgres`, `jdbc:mysql://database-mysql.test.us-east-1.rds.amazonaws.com:3306/mysql` | True | (none) |
| dbtable | The JDBC table to read from. On the read path, anything that is valid in a `FROM` clause of a SQL query can be used; for example, a subquery in parentheses instead of a full table. It is not allowed to specify `dbtable` and `query` at the same time. e.g. `db_name.table_name` or `(select col1, col2 from table) as A` | False | (none) |
| query | A query used to read data into Spark. The specified query is parenthesized and used as a subquery in the `FROM` clause, and Spark assigns an alias to the subquery. For example, Spark issues a query of the following form to the JDBC source: `SELECT columns FROM (<user_specified_query>) spark_gen_alias`. Two restrictions apply: (1) it is not allowed to specify `query` and `partitionColumn` at the same time; (2) when the `partitionColumn` option is required, the subquery can be specified via `dbtable` instead, and partition columns can be qualified using the subquery alias provided as part of `dbtable`. Example: `spark.read.format("jdbc").option("url", jdbcUrl).option("query", "select c1, c2 from t1").load()` | False | (none) |
| driver | The class name of the JDBC driver to use to connect to this URL, e.g. `org.postgresql.Driver` for Postgres or `com.mysql.cj.jdbc.Driver` for MySQL. | True | (none) |
| Partition Column, Lower Bound, Upper Bound | These options must all be specified if any of them is specified, and `numPartitions` must be specified as well. They describe how to partition the table when reading in parallel from multiple workers. `partitionColumn` must be a numeric, date, or timestamp column from the table in question. Note that `lowerBound` and `upperBound` are used only to decide the partition stride, not to filter rows, so all rows in the table are partitioned and returned. This option applies only to reading; see the sketch after this table. The dropdown to choose the partition column is populated only once the schema has been inferred. | False | (none) |
| Number of Partitions | The maximum number of partitions that can be used for parallelism in table reading. This also determines the maximum number of concurrent JDBC connections. | False | (none) |
| Query Timeout | The number of seconds the driver waits for a Statement object to execute. Zero means there is no limit. On the write path, this option depends on how the JDBC driver implements the `setQueryTimeout` API; e.g., the H2 JDBC driver checks the timeout of each query instead of the entire JDBC batch. | False | 0 |
| Fetch Size | The JDBC fetch size, which determines how many rows to fetch per round trip. This can help performance on JDBC drivers that default to a low fetch size (e.g. Oracle with 10 rows). | False | 0 |
| Session Init Statement | After each database session is opened to the remote database and before starting to read data, this option executes a custom SQL statement (or a PL/SQL block). Use this to implement session initialization code. Example: `option("sessionInitStatement", """BEGIN execute immediate 'alter session set "_serial_direct_read"=true'; END;""")` | False | (none) |
| Push-Down Predicate | Enables or disables predicate push-down into the JDBC data source. The default is true, in which case Spark pushes filters down to the JDBC data source as much as possible. If set to false, no filter is pushed down, and all filters are handled by Spark. Predicate push-down is usually turned off when Spark performs the filtering faster than the JDBC data source. | False | true |
| Push-Down Aggregate | Enables or disables aggregate push-down in the V2 JDBC data source. The default is false, in which case Spark does not push aggregates down to the JDBC data source. If set to true, aggregates are pushed down. Aggregate push-down is usually turned off when Spark performs the aggregation faster than the JDBC data source. Note that aggregates can be pushed down only if all the aggregate functions and the related filters can be pushed down. Spark assumes the data source cannot fully complete the aggregate and performs a final aggregate over the data source output. | False | false |
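For instance, a parallel read using the partition options above might look like the following sketch. The partition column, bounds, and partition count are illustrative assumptions, not values from a generated pipeline; `Config` refers to the same pipeline configuration object used in the generated code below.

```python
from pyspark.sql import SparkSession, DataFrame

def partitioned_source(spark: SparkSession) -> DataFrame:
    # Hedged sketch of a partitioned JDBC read. All four partition
    # options must be set together.
    return (
        spark.read.format("jdbc")
        .option("url", f"{Config.jdbc_url}")
        .option("user", f"{Config.jdbc_username}")
        .option("password", f"{Config.jdbc_password}")
        .option("driver", "org.postgresql.Driver")
        .option("dbtable", "public.demo_customers_raw")
        .option("partitionColumn", "customer_id")  # assumed numeric column
        .option("lowerBound", "1")        # decides stride only; does not filter rows
        .option("upperBound", "100000")
        .option("numPartitions", "8")     # also caps concurrent JDBC connections
        .load()
    )
```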
note

Please add the JDBC driver carefully. If you get a class-not-found error while running the pipeline, the driver dependency might be missing from the cluster. To read more about how to add dependencies for a specific JDBC jar, click here.

Source Example

Generated Code

```python
def Source(spark: SparkSession) -> DataFrame:
    return spark.read\
        .format("jdbc")\
        .option("url", f"{Config.jdbc_url}")\
        .option("user", f"{Config.jdbc_username}")\
        .option("password", f"{Config.jdbc_password}")\
        .option("dbtable", "public.demo_customers_raw")\
        .option("pushDownPredicate", True)\
        .option("driver", "org.postgresql.Driver")\
        .load()
```
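
The generated code above reads credentials from plain config values. When Databricks Secrets is chosen as the Credential Type, the credentials are instead resolved from the configured Credential Scope so they never appear in code. A minimal sketch of that pattern, assuming a Databricks environment where `dbutils` is available; the scope and key names are illustrative:

```python
from pyspark.sql import SparkSession, DataFrame

def SourceWithSecrets(spark: SparkSession) -> DataFrame:
    # "jdbc_scope", "jdbc_username", and "jdbc_password" are assumed names;
    # use the scope set in the Credential Scope field and your own keys.
    user = dbutils.secrets.get(scope="jdbc_scope", key="jdbc_username")
    password = dbutils.secrets.get(scope="jdbc_scope", key="jdbc_password")
    return (
        spark.read.format("jdbc")
        .option("url", f"{Config.jdbc_url}")
        .option("user", user)
        .option("password", password)
        .option("dbtable", "public.demo_customers_raw")
        .option("driver", "org.postgresql.Driver")
        .load()
    )
```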

Target

Target Parameters

| Parameter | Description | Required | Default |
| --- | --- | --- | --- |
| Credential Type | Credential provider type (Databricks Secrets or Username/Password). Credentials can be supplied via config options or Databricks secrets so that they are not visible in code. | True | (none) |
| Credential Scope | Scope to use for Databricks secrets. | False | (none) |
| url | The JDBC URL of the form `jdbc:subprotocol:subname` to connect to. Source-specific connection properties may be specified in the URL, e.g. `jdbc:postgresql://test.us-east-1.rds.amazonaws.com:5432/postgres`, `jdbc:mysql://database-mysql.test.us-east-1.rds.amazonaws.com:3306/mysql` | True | (none) |
| table | The JDBC table to write into. | True | (none) |
| driver | The class name of the JDBC driver to use to connect to this URL, e.g. `org.postgresql.Driver` for Postgres or `com.mysql.cj.jdbc.Driver` for MySQL. | True | (none) |
| Number of Partitions | The maximum number of partitions that can be used for parallelism in table writing. This also determines the maximum number of concurrent JDBC connections. If the number of partitions to write exceeds this limit, Spark decreases it to this limit by calling `coalesce(numPartitions)` before writing. | False | (none) |
| Query Timeout | The number of seconds the driver waits for a Statement object to execute. Zero means there is no limit. On the write path, this option depends on how the JDBC driver implements the `setQueryTimeout` API; e.g., the H2 JDBC driver checks the timeout of each query instead of the entire JDBC batch. | False | 0 |
| Batch Size | The JDBC batch size, which determines how many rows to insert per round trip. This can help performance on JDBC drivers. This option applies only to writing. | False | 1000 |
| Isolation Level | The transaction isolation level, which applies to the current connection. It can be one of `NONE`, `READ_COMMITTED`, `READ_UNCOMMITTED`, `REPEATABLE_READ`, or `SERIALIZABLE`, corresponding to the standard transaction isolation levels defined by JDBC's Connection object. Please refer to the documentation for `java.sql.Connection`. | False | READ_UNCOMMITTED |
| Truncate | When `SaveMode.Overwrite` is enabled, this option causes Spark to truncate an existing table instead of dropping and recreating it. This can be more efficient, and it prevents the table metadata (e.g., indices) from being removed. However, it does not work in some cases, such as when the new data has a different schema. If failures occur, turn the truncate option off so that `DROP TABLE` is used again. Also, due to the different behavior of `TRUNCATE TABLE` among DBMSs, it is not always safe to use this: MySQLDialect, DB2Dialect, MsSqlServerDialect, DerbyDialect, and OracleDialect support it, while PostgresDialect and the default JDBCDialect do not. For unknown and unsupported dialects, the `truncate` option is ignored. | False | false |
| Cascade Truncate | If enabled and supported by the JDBC database (PostgreSQL and Oracle at the moment), this option allows execution of a `TRUNCATE TABLE t CASCADE` (in the case of PostgreSQL, a `TRUNCATE TABLE ONLY t CASCADE` is executed to prevent inadvertently truncating descendant tables). This affects other tables and should therefore be used with care. | False | The default cascading truncate behaviour of the JDBC database in question, as specified by `isCascadeTruncate` in each JDBCDialect |
| Create Table Options | If specified, this option allows setting database-specific table and partition options when creating a table (e.g., `CREATE TABLE t (name string) ENGINE=InnoDB`). | False | (none) |
| Create Table Column Types | The database column data types to use instead of the defaults when creating the table. Data type information should be specified in the same format as `CREATE TABLE` column syntax (e.g., `"name CHAR(64), comments VARCHAR(1024)"`). The specified types should be valid Spark SQL data types. | False | (none) |
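
To illustrate, the write-tuning options above map onto Spark's JDBC writer options as follows. This is a hedged sketch; the numeric values are assumptions, not recommendations.

```python
from pyspark.sql import DataFrame

def TunedTarget(in0: DataFrame):
    # Illustrative values only; tune against your own database and workload.
    in0.write\
        .format("jdbc")\
        .option("url", f"{Config.jdbc_url}")\
        .option("dbtable", "public.demo_customers_raw_output")\
        .option("user", f"{Config.jdbc_username}")\
        .option("password", f"{Config.jdbc_password}")\
        .option("driver", "org.postgresql.Driver")\
        .option("numPartitions", "8")\
        .option("batchsize", "5000")\
        .option("isolationLevel", "READ_COMMITTED")\
        .option("queryTimeout", "60")\
        .save()
```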

Below are the different write modes that the JDBC Target supports.

| Write Mode | Description |
| --- | --- |
| overwrite | If data already exists, the existing data is overwritten by the contents of the DataFrame. |
| append | If data already exists, the contents of the DataFrame are appended to the existing data. |
| ignore | If data already exists, the save operation does not save the contents of the DataFrame and does not change the existing data. This is similar to a `CREATE TABLE IF NOT EXISTS` in SQL. |
| error | If data already exists, an exception is thrown. |
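
For example, combining overwrite mode with the truncate option keeps the existing table definition while replacing its rows. A hedged sketch, assuming a MySQL target since MySQLDialect supports truncate while PostgresDialect does not; the URL is illustrative:

```python
from pyspark.sql import DataFrame

def OverwriteTarget(in0: DataFrame):
    # TRUNCATE instead of DROP + CREATE on overwrite; the MySQL URL is an
    # assumption for this sketch, not part of the generated pipeline.
    in0.write\
        .format("jdbc")\
        .option("url", "jdbc:mysql://database-mysql.test.us-east-1.rds.amazonaws.com:3306/mysql")\
        .option("dbtable", "demo_customers_raw_output")\
        .option("user", f"{Config.jdbc_username}")\
        .option("password", f"{Config.jdbc_password}")\
        .option("driver", "com.mysql.cj.jdbc.Driver")\
        .option("truncate", "true")\
        .mode("overwrite")\
        .save()
```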

Target Example

Generated Code

```python
def Target(spark: SparkSession, in0: DataFrame):
    in0.write\
        .format("jdbc")\
        .option("url", f"{Config.jdbc_url}")\
        .option("dbtable", "public.demo_customers_raw_output")\
        .option("user", f"{Config.jdbc_username}")\
        .option("password", f"{Config.jdbc_password}")\
        .option("driver", "org.postgresql.Driver")\
        .save()
```