---
page_title: "airbyte_source_mssql Resource - terraform-provider-airbyte"
subcategory: ""
description: |-
  SourceMssql Resource
---

# airbyte_source_mssql (Resource)

SourceMssql Resource
## Example Usage

```terraform
resource "airbyte_source_mssql" "my_source_mssql" {
  configuration = {
    additional_properties = {
      key = {
        # ...
      }
    }
    additional_properties1             = "{ \"see\": \"documentation\" }"
    check_privileges                   = false
    checkpoint_target_interval_seconds = 4
    concurrency                        = 4
    database                           = "master"
    host                               = "...my_host..."
    jdbc_url_params                    = "...my_jdbc_url_params..."
    password                           = "...my_password..."
    port                               = 1433
    replication_method = {
      read_changes_using_change_data_capture_cdc = {
        additional_properties                = "{ \"see\": \"documentation\" }"
        initial_load_timeout_hours           = 4
        initial_waiting_seconds              = 0
        invalid_cdc_cursor_position_behavior = "Re-sync data"
        method                               = "CDC"
        poll_interval_ms                     = 6
      }
    }
    schemas = [
    ]
    ssl_mode = {
      unencrypted = {
        additional_properties = "{ \"see\": \"documentation\" }"
        mode                  = "unencrypted"
      }
    }
    tunnel_method = {
      no_tunnel = {
        additional_properties = "{ \"see\": \"documentation\" }"
        tunnel_method         = "NO_TUNNEL"
      }
    }
    username = "...my_username..."
  }
  definition_id = "3156776f-a553-4f83-b7be-07e1d515092f"
  name          = "...my_name..."
  secret_id     = "...my_secret_id..."
  workspace_id  = "89a5f137-cba1-4f2e-85cc-db4cd4426082"
}
```

## Schema

### Required

- `configuration` (Attributes) The values required to configure the source. The schema for this must match the schema returned by source_definition_specifications/get for the source. (see below for nested schema)
- `name` (String) Name of the source, e.g. dev-mysql-instance.
- `workspace_id` (String)
### Optional

- `definition_id` (String) The UUID of the connector definition. One of configuration.sourceType or definitionId must be provided. Default: "b5ea17b1-f170-46dc-bc31-cc744ca984c1"; Requires replacement if changed.
- `secret_id` (String) Optional secretID obtained through the public API OAuth redirect flow. Requires replacement if changed.
### Read-Only

- `created_at` (Number)
- `resource_allocation` (Attributes) Actor or actor definition specific resource requirements. If default is set, these are the requirements that should be set for ALL jobs run for this actor definition. They are overridden by the job type specific configurations. If not set, the platform will use defaults. These values will be overridden by configuration at the connection level. (see below for nested schema)
- `source_id` (String)
- `source_type` (String)
### Nested Schema for `configuration`

Required:

- `additional_properties` (Map of Map of String)
- `database` (String) The name of the database.
- `host` (String) The hostname of the database.
- `password` (String, Sensitive) The password associated with the username.
- `replication_method` (Attributes) Configures how data is extracted from the database. (see below for nested schema)
- `username` (String) The username which is used to access the database.

Optional:

- `additional_properties1` (String) Parsed as JSON.
- `check_privileges` (Boolean) When this feature is enabled, during schema discovery the connector will query each table or view individually to check access privileges, and inaccessible tables, views, or columns will be removed. In large schemas, this might cause schema discovery to take too long, in which case it might be advisable to disable this feature. Default: true
- `checkpoint_target_interval_seconds` (Number) How often (in seconds) a stream should checkpoint, when possible. Default: 300
- `concurrency` (Number) Maximum number of concurrent queries to the database.
- `jdbc_url_params` (String) Additional properties to pass to the JDBC URL string when connecting to the database, formatted as 'key=value' pairs separated by the symbol '&'. (example: key1=value1&key2=value2&key3=value3).
- `port` (Number) The port of the database. Default: 1433
- `schemas` (List of String) The list of schemas to sync from. If not specified, all schemas will be discovered. Case sensitive.
- `ssl_mode` (Attributes) The encryption method which is used when communicating with the database. (see below for nested schema)
- `tunnel_method` (Attributes) Whether to initiate an SSH tunnel before connecting to the database, and if so, which kind of authentication to use. (see below for nested schema)
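For orientation, a minimal `configuration` combining the required connection attributes above might look like the following sketch. The host, database, and credential values are placeholders, `var.mssql_password` is an assumed variable, and `additional_properties` is omitted for brevity:

```terraform
configuration = {
  host     = "mssql.example.com" # placeholder host
  port     = 1433                # documented default
  database = "master"
  username = "airbyte_reader"    # placeholder account
  password = var.mssql_password  # assumed Terraform variable
  replication_method = {
    scan_changes_with_user_defined_cursor = {
      method = "STANDARD"
    }
  }
}
```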
### Nested Schema for `configuration.replication_method`

Optional:

- `read_changes_using_change_data_capture_cdc` (Attributes) Recommended - Incrementally reads new inserts, updates, and deletes using MSSQL's change data capture feature. This must be enabled on your database. (see below for nested schema)
- `scan_changes_with_user_defined_cursor` (Attributes) Incrementally detects new inserts and updates using the cursor column chosen when configuring a connection (e.g. created_at, updated_at). (see below for nested schema)
### Nested Schema for `configuration.replication_method.read_changes_using_change_data_capture_cdc`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `initial_load_timeout_hours` (Number) The amount of time an initial load is allowed to continue for before catching up on CDC logs. Default: 8
- `initial_waiting_seconds` (Number) The amount of time the connector will wait when it launches to determine if there is new data to sync or not. Defaults to 300 seconds. Valid range: 120 seconds to 3600 seconds. Read about initial waiting time.
- `invalid_cdc_cursor_position_behavior` (String) Determines whether Airbyte should fail or re-sync data in case of a stale/invalid cursor value in the mined logs. If 'Fail sync' is chosen, a user will have to manually reset the connection before being able to continue syncing data. If 'Re-sync data' is chosen, Airbyte will automatically trigger a refresh, which could lead to higher cloud costs and data loss. Default: "Fail sync"; must be one of ["Fail sync", "Re-sync data"]
- `method` (String) Default: "CDC"; must be "CDC"
- `poll_interval_ms` (Number) How often (in milliseconds) Debezium should poll for new data. Must be smaller than the heartbeat interval (15000 ms). Lower values provide more responsive data capture but may increase database load. Default: 500
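As an illustrative sketch of the attributes above, a CDC `replication_method` block that sticks to the documented defaults might look like:

```terraform
replication_method = {
  read_changes_using_change_data_capture_cdc = {
    method                               = "CDC"
    initial_waiting_seconds              = 300         # documented default
    initial_load_timeout_hours           = 8           # documented default
    invalid_cdc_cursor_position_behavior = "Fail sync" # or "Re-sync data"
    poll_interval_ms                     = 500         # documented default
  }
}
```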
### Nested Schema for `configuration.replication_method.scan_changes_with_user_defined_cursor`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `exclude_todays_data` (Boolean) When enabled, incremental syncs using a cursor of a temporal type (date or datetime) will include cursor values only up until the previous midnight UTC. Default: false
- `method` (String) Default: "STANDARD"; must be "STANDARD"
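For comparison, a cursor-based `replication_method` block using the attributes above might be sketched as (values here are the documented defaults, not recommendations):

```terraform
replication_method = {
  scan_changes_with_user_defined_cursor = {
    method              = "STANDARD"
    exclude_todays_data = false # documented default
  }
}
```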
### Nested Schema for `configuration.ssl_mode`

Optional:

- `encrypted_trust_server_certificate` (Attributes) Use the certificate provided by the server without verification. (For testing purposes only!) (see below for nested schema)
- `encrypted_verify_certificate` (Attributes) Verify and use the certificate provided by the server. (see below for nested schema)
- `unencrypted` (Attributes) Data transfer will not be encrypted. (see below for nested schema)
### Nested Schema for `configuration.ssl_mode.encrypted_trust_server_certificate`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `mode` (String) Default: "encrypted_trust_server_certificate"; must be "encrypted_trust_server_certificate"
### Nested Schema for `configuration.ssl_mode.encrypted_verify_certificate`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `certificate` (String, Sensitive) Certificate of the server, or of the CA that signed the server certificate.
- `host_name_in_certificate` (String) Specifies the host name of the server. The value of this property must match the subject property of the certificate.
- `mode` (String) Default: "encrypted_verify_certificate"; must be "encrypted_verify_certificate"
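An illustrative `ssl_mode` block for full certificate verification might look like the following sketch; the certificate body and host name are placeholders:

```terraform
ssl_mode = {
  encrypted_verify_certificate = {
    mode                     = "encrypted_verify_certificate"
    # Placeholder PEM; in practice, load it e.g. with file("server_ca.pem")
    certificate              = "-----BEGIN CERTIFICATE-----\n...\n-----END CERTIFICATE-----"
    host_name_in_certificate = "mssql.example.com" # must match the certificate's subject
  }
}
```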
### Nested Schema for `configuration.ssl_mode.unencrypted`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `mode` (String) Default: "unencrypted"; must be "unencrypted"
### Nested Schema for `configuration.tunnel_method`

Optional:

- `no_tunnel` (Attributes) No SSH tunnel needed to connect to the database. (see below for nested schema)
- `password_authentication` (Attributes) Connect through a jump server tunnel host using username and password authentication. (see below for nested schema)
- `ssh_key_authentication` (Attributes) Connect through a jump server tunnel host using username and SSH key. (see below for nested schema)
### Nested Schema for `configuration.tunnel_method.no_tunnel`

Optional:

- `additional_properties` (String) Parsed as JSON.
- `tunnel_method` (String) Default: "NO_TUNNEL"; must be "NO_TUNNEL"
### Nested Schema for `configuration.tunnel_method.password_authentication`

Required:

- `tunnel_host` (String) Hostname of the jump server host that allows inbound SSH tunnel.
- `tunnel_user` (String) OS-level username for logging into the jump server host.
- `tunnel_user_password` (String, Sensitive) OS-level password for logging into the jump server host.

Optional:

- `additional_properties` (String) Parsed as JSON.
- `tunnel_method` (String) Default: "SSH_PASSWORD_AUTH"; must be "SSH_PASSWORD_AUTH"
- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound SSH connections. Default: 22
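Putting the attributes above together, a password-authenticated tunnel might be sketched as follows; the host and user values are placeholders and `var.tunnel_password` is an assumed variable:

```terraform
tunnel_method = {
  password_authentication = {
    tunnel_method        = "SSH_PASSWORD_AUTH"
    tunnel_host          = "bastion.example.com" # placeholder jump server
    tunnel_port          = 22                    # documented default
    tunnel_user          = "airbyte"             # placeholder OS-level user
    tunnel_user_password = var.tunnel_password   # assumed Terraform variable
  }
}
```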
### Nested Schema for `configuration.tunnel_method.ssh_key_authentication`

Required:

- `ssh_key` (String, Sensitive) OS-level user account SSH key credentials in RSA PEM format (created with `ssh-keygen -t rsa -m PEM -f myuser_rsa`).
- `tunnel_host` (String) Hostname of the jump server host that allows inbound SSH tunnel.
- `tunnel_user` (String) OS-level username for logging into the jump server host.

Optional:

- `additional_properties` (String) Parsed as JSON.
- `tunnel_method` (String) Default: "SSH_KEY_AUTH"; must be "SSH_KEY_AUTH"
- `tunnel_port` (Number) Port on the proxy/jump server that accepts inbound SSH connections. Default: 22
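The key-authenticated variant might be sketched like this; the host and user are placeholders, and `file("myuser_rsa")` reuses the key file name from the `ssh-keygen` example above:

```terraform
tunnel_method = {
  ssh_key_authentication = {
    tunnel_method = "SSH_KEY_AUTH"
    tunnel_host   = "bastion.example.com" # placeholder jump server
    tunnel_port   = 22                    # documented default
    tunnel_user   = "airbyte"             # placeholder OS-level user
    ssh_key       = file("myuser_rsa")    # RSA PEM private key file
  }
}
```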
### Nested Schema for `resource_allocation`

Read-Only:

- `default` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
- `job_specific` (Attributes List) (see below for nested schema)
### Nested Schema for `resource_allocation.default`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
### Nested Schema for `resource_allocation.job_specific`

Read-Only:

- `job_type` (String) Enum that describes the different types of jobs that the platform runs.
- `resource_requirements` (Attributes) Optional resource requirements to run workers (blank for unbounded allocations). (see below for nested schema)
### Nested Schema for `resource_allocation.job_specific.resource_requirements`

Read-Only:

- `cpu_limit` (String)
- `cpu_request` (String)
- `ephemeral_storage_limit` (String)
- `ephemeral_storage_request` (String)
- `memory_limit` (String)
- `memory_request` (String)
## Import

Import is supported using the following syntax:

In Terraform v1.5.0 and later, the `import` block can be used with the `id` attribute, for example:

```terraform
import {
  to = airbyte_source_mssql.my_airbyte_source_mssql
  id = "..."
}
```

The `terraform import` command can be used, for example:

```shell
terraform import airbyte_source_mssql.my_airbyte_source_mssql "..."
```