CREATE SERVER

CREATE SERVER creates an instance of a plugin, configured with options, for accessing a foreign data source. A server object contains the connection information that a foreign data wrapper uses to access the external data source.

After you run CREATE SERVER, you can reference the server in a CREATE FOREIGN STREAM or CREATE FOREIGN TABLE statement.

The server options required depend on the foreign data wrapper used. Most foreign data sources use one of two data wrappers:

  • ECDA. The Extensible Common Data Adapter, which lets you read and write data from s-Server to and from a variety of sources, including the file system, network sockets, AMQP, and Kafka. See Server Definition Options for Extensible Common Data Adapter below.
  • SYS_JDBC. A plugin implementing SQL/MED access to any foreign DBMS for which a JDBC driver is available. See Server Definition Options for RDBMS Databases below.

To read from or write to a foreign data source, you use the Extensible Common Data Adapter. For more information on the code structure for reading from and writing to sources using this adapter, see the topic Reading from Other Sources in the Integrating Guavus SQLstream with Other Systems guide.

For example, the following code defines an Extensible Common Data Adapter server of type "file", for reading over the file system:

CREATE OR REPLACE SERVER "FileReaderServer" TYPE 'FILE'
FOREIGN DATA WRAPPER ECDA;

Syntax
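
The statement takes the general form sketched below. This outline is inferred from the examples in this topic rather than the formal grammar; square brackets mark optional elements.

CREATE [OR REPLACE] SERVER <server_name>
    [TYPE '<plugin_type>']                       -- required for ECDA servers
    FOREIGN DATA WRAPPER <wrapper_name>          -- for example, ECDA or SYS_JDBC
    [OPTIONS (<option_name> '<value>' [, ...])];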

Using Prebuilt Server Objects

S-Server ships with a number of prebuilt server objects for the Extensible Common Data Adapter. In many cases, these objects will work fine for creating foreign streams for ECDA sources and sinks. You generally only need to create your own server object for ECDA sources and sinks if you know you will have multiple foreign streams that share options. In this case, it may be more efficient to define a custom server object with these shared options. Doing so allows foreign streams that invoke the custom server object to inherit options. See Using SQL/MED Inheritance with Server objects.

Prebuilt Server Objects Available in s-Server

This table lists the ECD plugin types, and the corresponding prebuilt servers.

The 'Type' is used in the TYPE clause when creating a SERVER, and with the --io switch when running an ECD agent.

| Type | Plugin Description | Name of Prebuilt Server for Foreign Streams | Notes |
| --- | --- | --- | --- |
| amqp_legacy | AMQP Legacy: reads from and writes to an AMQP message bus for AMQP 0.9. See Reading from AMQP and Writing to AMQP for more details. | AMQP_LEGACY_SERVER | |
| amqp10 | AMQP 1.0: reads from and writes to an AMQP message bus for AMQP 1.0. See Reading from AMQP and Writing to AMQP for more details. | AMQP10_SERVER | |
| hdfs | Allows s-Server to write to the Hadoop/HDFS file system (also supports ADLS Gen 2). See Writing to Hadoop for more details. | HDFS_SERVER | sink only |
| hive | Allows s-Server to write to Hadoop/Hive tables. See Writing to Hive Tables for more details. | HIVE_SERVER | sink only; ORC format only |
| http | Allows s-Server to read and write data over HTTP/HTTPS. See Reading over HTTP and Writing over HTTP for more details. | HTTP_SERVER | |
| file | File system: reading and writing over the file system. See Reading from the File System and Writing to the File System for more details. | FILE_SERVER | |
| filevfs | Allows s-Server to read compressed files from ADLS Gen2, HDFS, the local file system, S3, and SFTP. See Integrating Files using VFS for more details. | FILE_VFS_SERVER | source only |
| ibmmq | Allows s-Server to read from and write to queues and topics in IBM MQ. See Reading from IBM MQ and Writing to IBM MQ for more details. | IBMMQ_SERVER | |
| kafka | Allows s-Server to exchange data with Kafka clusters. See Reading from Kafka and Writing to Kafka for more details. | KAFKA_SERVER | non-transactional |
| kafka10 | Allows s-Server to exchange data with Kafka clusters, version 0.10 and above; also supports Azure Event Hubs. See Reading from Kafka and Writing to Kafka for more details. | KAFKA10_SERVER | versions since 0.10.2 |
| kinesis | Allows s-Server to exchange data with Kinesis streams. See Reading from Kinesis and Writing to Kinesis for more details. | KINESIS_SERVER | |
| pulsar | Allows s-Server to read from and write to Pulsar streams. See Integrating Pulsar for more details. | PULSAR_SERVER | |
| mail | Allows s-Server to connect to an SMTP server in order to send emails. See Writing to Mail Servers for details. | MAIL_SERVER | sink only |
| mongodb | Allows s-Server to write to MongoDB. See Writing to MongoDB for more details. | MONGO_SERVER | sink only |
| mqtt | Allows s-Server to exchange data with MQTT brokers. See Reading from MQTT and Writing to MQTT for more details. | MQTT_SERVER | |
| snowflake | Allows s-Server to write to Snowflake warehouses. See Writing to Snowflake for more details. | SNOWFLAKE_SERVER | sink only; uploads files |
| net | Network sockets (TCP/UDP): reads or writes data streamed from a client program using TCP or UDP. See Reading from Network Sockets and Writing to Network Sockets for more details. | NET_SERVER | |
| websocket | Allows s-Server to read and write data over web sockets. See Reading from Websockets and Writing to Websockets for more details. | WEBSOCKET_SERVER | |

NOTES:

  • You can select a list of all servers (including these pre-defined servers):
    sqllineClient --run=$SQLSTREAM_HOME/support/sql/showForeignServers

Server Definition Options for Extensible Common Data Adapter

All server declarations for the Extensible Common Data Adapter must declare a type. The TYPE clause accepts the types listed in the table above.

Server Definition Options for RDBMS Databases

The table below lists options for reading from and writing to RDBMS databases. See the topics Reading Data from RDBMS Sources and Writing Data to RDBMS Destinations in the Integrating Guavus SQLstream with Other Systems for more details.

| Name | Input / Output | Description |
| --- | --- | --- |
| DRIVER | Both | Fully-qualified class name of the JDBC driver for the remote database. The class must be available on the classpath. |
| URI | Both | JDBC URI of the remote database. |
| USER_NAME | Both | Remote database user. |
| PASSWORD | Both | Remote database password. |
| DIALECT | Both | Type of database, generally auto-detected. Valid values are: Oracle, POSTGRES, MYSQL, TERADATA, SQL SERVER. |
| TERADATA_QUERY_BAND | Input | Optional, and for Teradata connections only. Teradata query bands "tag" the query; for more information, see the Teradata website. The band is set before the actual query runs. The submitted SQL looks like the following: SET QUERY_BAND = 'ApplicationName=SQLstream_s-Server;Version=;' FOR SESSION |
| queryCol | Tailing input | Name of the column to use as a high-water mark, such as ROWTIME. No default. See Tailing Tables. |
| pollingInterval | Tailing input | How often to sleep, in milliseconds, when no new rows are available. Default is 1000. See Tailing Tables. |
| txInterval | Tailing input | How many rows to wait before reading a row, in order to ensure that no rows are missed as a result of rollbacks or dirty reads. If queryCol is ROWTIME, consider setting txInterval to 1000. Larger values add latency to the query but make the stream more consistent (safer) in the face of crashes and multiple inserters to the remote table; smaller values add less latency but may miss a row in the remote chance of a dirty read. See Tailing Tables. |
| TRANSACTION_ROWTIME_LIMIT | Output | The number of milliseconds which may elapse between commits. Defaults to 1000 ms (1 second). May be overridden by an INSERT or MERGE ... WITH OPTIONS clause. See Managing Commits. |
| TRANSACTION_ROW_LIMIT | Output | The number of rows to batch up before committing. Defaults to 0. May be overridden by an INSERT or MERGE ... WITH OPTIONS clause. See Managing Commits. |
| JNDI_WRITEBACK | Both | True/false; defaults to "false". When set to "true" in the DDL or in the .properties file, s-Server writes back the consolidated options (including options inherited from the parent as well as from the properties file) to the same properties file under $SQLSTREAM_HOME/plugin/jndi. This lets you configure the JNDI file using DDL. |
| SCHEMA_NAME | Both | Name of the remote schema. |
| TABLE_NAME | Both | Name of the remote table. Normally specified only at FOREIGN TABLE / FOREIGN STREAM level, not against the SERVER. |
| OPTIONS_QUERY | Both | Optional. Lets you query a table to set one or more of the required options at runtime; for example, select URI, "SCHEMA", USERNAME, PASSWORD from TEST.rdbms_options. For more details see the topic Using the Options Query Property. Usually applied to the FOREIGN TABLE rather than the SERVER. |
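
For instance, a SYS_JDBC server that tails a PostgreSQL table might combine several of these options. The following is a sketch only: the host, database, and credentials are placeholders, the option names are taken from the table above, and the camel-cased names are quoted to preserve case.

CREATE OR REPLACE SERVER postgres_tail_reader
FOREIGN DATA WRAPPER SYS_JDBC
OPTIONS (
    DRIVER 'org.postgresql.Driver',
    URI 'jdbc:postgresql://dbhost:5432/sales',
    USER_NAME 'sqlstream',
    PASSWORD 'changeme',
    DIALECT 'POSTGRES',
    "queryCol" 'ROWTIME',        -- high-water mark column for tailing
    "pollingInterval" '1000',    -- sleep 1 second when no new rows arrive
    "txInterval" '1000'          -- guard against rollbacks and dirty reads
);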

Example

Here is an example that configures a connection to a file server. See Writing to the File System for more information.

CREATE OR REPLACE SERVER "FileReaderServer" TYPE 'FILE'
FOREIGN DATA WRAPPER ECDA
OPTIONS
(directory 'myDirectory',
//directory for the file
formatter 'CSV',
filename_pattern 'myRecord.csv',
//regex for filename pattern to look for
character_encoding 'UTF-8',
write_header 'true');

Note: ECD Adapter server definitions need to reference the ECD foreign data wrapper. You can do so with the syntax FOREIGN DATA WRAPPER ECDA.

The following code block defines a JDBC connection for a MySQL database.

CREATE OR REPLACE SERVER mysql_reader
FOREIGN DATA WRAPPER SYS_JDBC
OPTIONS (
    DRIVER_CLASS 'com.mysql.jdbc.Driver',
    URL 'jdbc:mysql://localhost:3306/sample',
    USER_NAME 'sqlstream',
    PASSWORD 'sqlstream'
);
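
Once the server exists, you can reference it from a foreign table, supplying SCHEMA_NAME and TABLE_NAME at table level as noted above. The following is a sketch only; the table orders_remote and the remote table sample.orders are illustrative.

CREATE OR REPLACE FOREIGN TABLE orders_remote
SERVER mysql_reader
OPTIONS (
    SCHEMA_NAME 'sample',
    TABLE_NAME 'orders'
);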

Using SQL/MED Inheritance with Server Objects

SQLstream's implementation of the SQL/MED framework supports OPTIONS inheritance. That is, each SQL/MED object inherits the OPTIONS of its parent or ancestor unless they are overridden in the OPTIONS clause of the object itself.

Specifically, this means that you can specify options in a SERVER object that will be passed through to foreign streams invoking the SERVER object.

The following example defines a SERVER object with a SEED_BROKERS option for a Kafka source/sink:

CREATE OR REPLACE SERVER EMEA_KAFKA_SERVER TYPE 'KAFKA10'
FOREIGN DATA WRAPPER ECDA
OPTIONS (
    SEED_BROKERS 'dcserver1.mycompany.co.uk:9092,dcserver2.mycompany.co.uk:9092'
);

CREATE OR REPLACE FOREIGN STREAM emea.inventory_updates (
    kafka_offset BIGINT NOT NULL,
    "ts" TIMESTAMP NOT NULL,
    "KAFKA_PARTITION" INT NOT NULL,    -- special column for the Kafka partition
    "item" CHAR(200) NOT NULL,
    "item_number" DOUBLE
)
SERVER EMEA_KAFKA_SERVER
OPTIONS (
    TOPIC 'emea_inventory_updates',
    STARTING_TIME 'LATEST',
    max_poll_records '2MB',
    parser 'CSV',
    character_encoding 'UTF-8'
);

In this case, the foreign stream emea.inventory_updates inherits the SEED_BROKERS option from EMEA_KAFKA_SERVER. Every foreign stream that invokes EMEA_KAFKA_SERVER will similarly inherit this option, unless the foreign stream itself specifies SEED_BROKERS as an option. In that case, the foreign stream option SEED_BROKERS will override the SEED_BROKERS option in the SERVER object.
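
For instance, a stream that must read from a different cluster can override the inherited broker list. In the sketch below, the stream name, column list, and the drserver1 hostname are illustrative only:

CREATE OR REPLACE FOREIGN STREAM emea.inventory_updates_dr (
    "ts" TIMESTAMP NOT NULL,
    "item" CHAR(200) NOT NULL
)
SERVER EMEA_KAFKA_SERVER
OPTIONS (
    SEED_BROKERS 'drserver1.mycompany.co.uk:9092',    -- overrides the inherited value
    TOPIC 'emea_inventory_updates',
    parser 'CSV'
);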

This approach has several advantages. If you know that a number of foreign streams share configuration options, you can specify these once in a SERVER object, and then change them as needed in the same SERVER object.

Here, an application developer writing a foreign stream does not even need to know the hostnames in the Kafka cluster. The developer simply uses EMEA_KAFKA_SERVER to create multiple foreign streams, all of which inherit the SEED_BROKERS option from the foreign server. A single change to the EMEA_KAFKA_SERVER definition then updates the SEED_BROKERS property for all of those foreign streams at once.

Using Properties Files

With this options inheritance in place, the SQL/MED framework also supports loading some of these options from .properties files. SQLstream s-Server looks for these .properties files in $SQLSTREAM_HOME/plugin/jndi/.

You, or your system administrator, can create a file called $SQLSTREAM_HOME/plugin/jndi/<server_name>.properties. Properties from this file will be automatically imported as OPTIONS in a foreign server with the name <server_name>.

For the EMEA.INVENTORY_UPDATES stream defined above, the file path would be $SQLSTREAM_HOME/plugin/jndi/LOCALDB.EMEA.INVENTORY_UPDATES.properties. Please note that the file name must be uppercased if the object name is NOT quoted in the DDL; we recommend quoting object names so that the DDL and the file name match exactly.

For EMEA_KAFKA_SERVER, the path would be $SQLSTREAM_HOME/plugin/jndi/EMEA_KAFKA_SERVER.properties
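
For example, a properties file for EMEA_KAFKA_SERVER might contain nothing more than the broker list. This is a sketch only; each property name mirrors the server OPTION it supplies.

# $SQLSTREAM_HOME/plugin/jndi/EMEA_KAFKA_SERVER.properties
SEED_BROKERS=dcserver1.mycompany.co.uk:9092,dcserver2.mycompany.co.uk:9092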

For the configuration above, app developers can simply create these .properties files with appropriate names and those properties will be automatically imported when SQL/MED objects are created. Any options explicitly defined in the OPTIONS clause of the DDL will override those options defined in the .properties file.

OPTIONS inheritance works in the following order for each SQL/MED object:

  1. Inherit options from the parent.
  2. Override options from the <fully_qualified_objectname.properties> file.
  3. Override options explicitly specified in the OPTIONS clause of the DDL.

You can use properties files to abstract away certain options or hide connection credentials from app developers.

Using the JNDI_WRITEBACK Option

This option enables system administrators to persist the options supplied in the OPTIONS clause of the DDL to the <objectname>.properties file. An administrator can supply connection credentials through the DDL once, with JNDI_WRITEBACK enabled, then disable the option and remove the sensitive options from the DDL; when the DDL is resubmitted, those options are imported from the properties file rather than appearing in the DDL statements.
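
A sketch of that workflow, reusing the mysql_reader server from the earlier example:

-- 1. Submit once with writeback enabled; s-Server persists the consolidated
--    options to the properties file under $SQLSTREAM_HOME/plugin/jndi.
CREATE OR REPLACE SERVER mysql_reader
FOREIGN DATA WRAPPER SYS_JDBC
OPTIONS (
    DRIVER_CLASS 'com.mysql.jdbc.Driver',
    URL 'jdbc:mysql://localhost:3306/sample',
    USER_NAME 'sqlstream',
    PASSWORD 'sqlstream',
    JNDI_WRITEBACK 'true'
);

-- 2. Resubmit later with the credentials removed and writeback disabled;
--    the persisted properties are imported from the file instead.
CREATE OR REPLACE SERVER mysql_reader
FOREIGN DATA WRAPPER SYS_JDBC
OPTIONS (JNDI_WRITEBACK 'false');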