
Configure Logging Output Plugins

Overview

In the past, logs from all Algorithmia services were forwarded to Elasticsearch via Logstash. Because many customers want the flexibility to use their own pre-existing log-aggregation systems, we've added support for Logstash output plugins so that Algorithmia can be configured, through the installer itself, to forward logs to other external systems. This gives developers a more convenient way to access logging output for key operations on the platform, including the algorithm build process and the testing of an already built algorithm's runtime behavior.

The key changes are as follows:

  1. Logging output plugins and associated configuration variables have been added to the Logstash installer, and the config file is generated based on those variables.
  2. Sensitive files (instance private key files, certificates, etc.) required for the Logstash configuration can now be added through the installer and will be automatically mounted as secrets.

Steps

Generating a new config file

Select the plugin you'll be using (for example, the Syslog output plugin or the Kafka output plugin) and add the appropriate configuration values. A new config file will be generated from these variables.
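For example, to enable the Syslog output plugin, you might set variables along these lines. The host name and file names below are placeholders, and the exact assignment syntax depends on your installer version; the variable names themselves come from the unilog-logstash schema ref listed later in this page.

```
unilog_syslog_output_enabled = true
unilog_syslog_host = "syslog.host.foo"
unilog_syslog_port = 514
unilog_syslog_security_protocol = "ssl-tcp"
unilog_syslog_cert_file_name = "client-certificate.crt"
unilog_syslog_key_file_name = "client-certificate.key"
unilog_syslog_cacert_file_name = "client-ca-chain.crt"
```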

Mounting secrets

To mount your secrets, the steps are as follows:

1. Add your secret files to the {deployment}/services/{service}/secrets/ directory. Once step 2 is complete, files in this directory will automatically be turned into secrets and mounted. You can also modify this step to generate the secret in JKS format.

2. Add the file names to the list of files for the secret_mount_file_names installer variable, located in the unilog-logstash schema ref.

plan.services['unilog-logstash'].secret_mount_file_names


3. Delete and redeploy the current StatefulSet using the installer for the changes to take effect.
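As a sketch, assuming client certificate and key files named as below (the file names are placeholders, and the comma-separated list format is an assumption about the installer), steps 1 and 2 might look like this:

```
# 1. Files placed in the secrets directory:
#    {deployment}/services/unilog-logstash/secrets/client-certificate.crt
#    {deployment}/services/unilog-logstash/secrets/client-certificate.key
#    {deployment}/services/unilog-logstash/secrets/client-ca-chain.crt

# 2. Installer variable listing those file names:
plan.services['unilog-logstash'].secret_mount_file_names =
  "client-certificate.crt,client-certificate.key,client-ca-chain.crt"
```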

Below is the complete set of variables supported in the unilog-logstash schema ref.

    "unilog_syslog_output_enabled"
      "type" "boolean"
      "default" "false"

    "unilog_syslog_host"
      "type" "string"

    "unilog_syslog_port"
      "type" "integer"
      "default" 514

    "unilog_syslog_security_protocol"
      "type" "string"
      "default" "ssl-tcp"

    "unilog_syslog_key_file_name"
      "type" "string"

    "unilog_syslog_key_passphrase"
      "type" "string"

    "unilog_syslog_cert_file_name"
      "type" "string"

    "unilog_syslog_cacert_file_name"
      "type" "string"

    "unilog_syslog_codec"
      "type" "string"
      "default" "json"

    # used to set an if statement to filter logs based on content
    "unilog_syslog_if_statement"
      "type" "string"

    "unilog_kafka_output_enabled"
      "type" "boolean"
      "default" "false"

    "unilog_kafka_trust_file_name"
      "type" "string"

    "unilog_kafka_truststore_password"
      "type" "string"

    "unilog_kafka_truststore_type"
      "type" "string"
      "default" "JKS"

    "unilog_kafka_key_file_name"
      "type" "string"

    "unilog_kafka_key_password"
      "type" "string"

    "unilog_kafka_jaas_file_name"
      "type" "string"

    "unilog_kafka_id"
      "type" "string"

    "unilog_kafka_topic"
      "type" "string"

    "unilog_kafka_codec"
      "type" "string"
      "default" "json"

    "unilog_kafka_security_protocol"
      "type" "string"

    "unilog_kafka_bootstrap_server"
      "type" "string"

    "unilog_kafka_if_statement"
        "type" "string"

    "unilog_input_generic"
      "type" "string"

    "unilog_filter_generic"
      "type" "string"

    "unilog_output_generic"
      "type" "string"

    "secret_mount_file_names"
      "type" "string"


Optional filtering

The following is an example of how some of these variables might be used to filter the log output so that logs from specific services are forwarded to specific outputs. Specifically, this example uses an if statement to send only algorithm build and execution logs to the Kafka output, while all logs still flow to syslog.

if [kubernetes.labels.app] == "langserver" and [kubernetes.container.name] == "algo-builder" {
  kafka {
    bootstrap_servers => "exampleserver:8080"
    topic_id => "example_topic"
    security_protocol => "SASL_SSL"
    ssl_keystore_location => "/log-export-config/kafka.example.keystore.jks"
    ssl_keystore_password => "examplepass"
    ssl_keystore_type => "JKS"
    ssl_key_password => "examplepass"
    ssl_truststore_location => "/log-export-config/kafka.example.truststore.jks"
    ssl_truststore_password => "examplepass"
    ssl_truststore_type => "JKS"
    jaas_path => "/log-export-config/kafka.example.sasl.jaas"
  }
}
syslog {
  host => "syslog.host.foo"
  port => "443"
  protocol => "ssl-tcp"
  ssl_cert => "/log-export-config/client-certificate.crt"
  ssl_key => "/log-export-config/client-certificate.key"
  ssl_cacert => "/log-export-config/client-ca-chain.crt"
  rfc => "rfc3164"
}