Set up the Openflow Connector for Jira Cloud

Note

The connector is subject to the Connector Terms.

This topic describes the steps to set up the Openflow Connector for Jira Cloud.

Prerequisites

  1. Ensure that you have reviewed About Openflow Connector for Jira Cloud.

  2. Ensure that you have set up Openflow.

Get the credentials

As a Jira Cloud administrator, perform the following tasks in your Atlassian account:

  1. Navigate to the API tokens page (https://id.atlassian.com/manage-profile/security/api-tokens).

  2. Select Create API token.

  3. In the Create an API token dialog box, provide a descriptive name for the API token and select an expiration date for the API token. This can range from 1 to 365 days.

  4. Select Create.

  5. In the Copy your API token dialog box, select Copy to copy your generated API token, and then paste the token into the connector parameters or save it securely.

  6. Select Done to close the dialog box.

Set up Snowflake account

As a Snowflake account administrator, perform the following tasks:

  1. Create a new role or use an existing role, and grant it the privileges required on the destination database, such as USAGE on the database and schema and CREATE TABLE on the schema. A SQL sketch follows this list.

  2. Create a new Snowflake service user with TYPE set to SERVICE.

  3. Grant the Snowflake service user the role you created in step 1.

  4. Configure key-pair authentication for the Snowflake service user from step 2.

  5. Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, such as AWS Secrets Manager, Azure Key Vault, or HashiCorp Vault, and store the public and private keys in the secret store.

    Note

    If, for any reason, you do not wish to use a secrets manager, you are responsible for safeguarding the public key and private key files used for key-pair authentication according to the security policies of your organization.

    1. Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it is recommended that you use the EC2 instance role associated with Openflow, because then no other secrets have to be persisted.

    2. In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.

    3. At this point, all credentials can be referenced by their parameter paths, and no sensitive values need to be persisted within Openflow.

  6. If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), grant those users the role created in step 1.

  7. Designate a warehouse for the connector to use. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes.
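The following SQL sketch illustrates steps 1 through 4 and step 7. All object names (jira_connector_role, jira_connector_user, jira_connector_wh) are placeholders, and the public key value is truncated; substitute names and a key that match your environment and security policies.

    -- Step 1: create a role for the connector (name is a placeholder).
    CREATE ROLE IF NOT EXISTS jira_connector_role;

    -- Step 2: create a service user for the connector.
    CREATE USER IF NOT EXISTS jira_connector_user TYPE = SERVICE;

    -- Step 3: grant the role to the service user and make it the default.
    GRANT ROLE jira_connector_role TO USER jira_connector_user;
    ALTER USER jira_connector_user SET DEFAULT_ROLE = jira_connector_role;

    -- Step 4: attach the public key of an RSA key pair to the service user.
    -- The key value is truncated here; generate the pair as described in the
    -- Snowflake key-pair authentication documentation.
    ALTER USER jira_connector_user SET RSA_PUBLIC_KEY = 'MIIBIjANBgkqh...';

    -- Step 7: designate a warehouse, starting with the smallest size.
    CREATE WAREHOUSE IF NOT EXISTS jira_connector_wh
      WAREHOUSE_SIZE = XSMALL
      AUTO_SUSPEND = 300
      AUTO_RESUME = TRUE;
    GRANT USAGE ON WAREHOUSE jira_connector_wh TO ROLE jira_connector_role;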

Configure the connector

As a data engineer, perform the following tasks to configure a connector:

  1. Create a database and schema in Snowflake for the connector to store ingested data. A SQL sketch follows this list.

  2. Download the connector definition file.

  3. Import the connector definition into Openflow:

    1. Open the Snowflake Openflow canvas.

    2. Add a process group. To do this, drag and drop the Process Group icon from the tool palette at the top of the page onto the canvas. Once you release your pointer, a Create Process Group dialog appears.

    3. On the Create Process Group dialog, select the connector definition file to import.

  4. Right-click on the imported process group and select Parameters.

  5. Populate the required parameter values as described in Flow parameters.
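As a minimal sketch of step 1, assuming the placeholder names jira_db and jira_schema and the connector role from the previous section:

    -- Create the destination database and schema (names are placeholders).
    CREATE DATABASE IF NOT EXISTS jira_db;
    CREATE SCHEMA IF NOT EXISTS jira_db.jira_schema;

    -- Allow the connector role to use them and to create the destination table.
    GRANT USAGE ON DATABASE jira_db TO ROLE jira_connector_role;
    GRANT USAGE, CREATE TABLE ON SCHEMA jira_db.jira_schema TO ROLE jira_connector_role;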

Flow parameters

This section describes the flow parameters that you can configure.

Destination Account (Required)
    The Snowflake account, in the [organization-name]-[account-name] format, where data retrieved from the Jira Cloud API is stored.

Snowflake Private Key (Optional)
    The RSA private key used for authentication. The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Note that either Snowflake Private Key File or Snowflake Private Key must be defined.

Snowflake Private Key File (Optional)
    The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and with standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE.

Snowflake Private Key Password (Optional)
    The password associated with the Snowflake Private Key File.

Destination Role (Required)
    The Snowflake role that the connector uses.

Destination User (Required)
    The Snowflake user that the connector uses.

Destination Warehouse (Required)
    The Snowflake warehouse name.

Destination Database (Required)
    The Snowflake destination database name. The database must be created in advance.

Destination Schema (Required)
    The Snowflake destination schema name. The schema must be created in advance.

Destination Table (Required)
    The Snowflake table name used to store the issue data fetched from Jira Cloud. The table is created automatically.

Authorization Method (Required)
    The authorization method for the Jira Cloud API. Default value: BASIC.

Jira Email (Required)
    The email address for the Atlassian account. Visible only when Authorization Method is BASIC.

Jira API Token (Required)
    The API access token for your Atlassian Jira account. Visible only when Authorization Method is BASIC.

Environment URL (Required)
    The URL of the Atlassian Jira environment.

Search Type (Required)
    The type of search to perform. Possible values: SIMPLE and JQL. Default value: SIMPLE.

Jql query (Required)
    A JQL query, for example project = TEST ORDER BY updated ASC. Visible only when Search Type is JQL.

Project Name (Required)
    The project whose issues are searched, identified by project name, project key, or project ID. Visible only when Search Type is SIMPLE.

Status Category (Optional)
    A status category filter for simple search. Visible only when Search Type is SIMPLE.

Updated After (Optional)
    Filters for issues updated after a specified date and time. Visible only when Search Type is SIMPLE.

Created After (Optional)
    Filters for issues created after a specified date and time. Visible only when Search Type is SIMPLE.

Issue Fields (Optional)
    A list of fields to return for each issue, used to retrieve a subset of fields. Accepts a comma-separated list. Default value: all.

Maximum Page Size (Optional)
    The maximum number of items to return per page. Default value: 200.
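For illustration only, a BASIC authorization with a SIMPLE search might use values like the following. All values are hypothetical; replace them with your own.

    Destination Account: myorg-myaccount
    Destination User: jira_connector_user
    Destination Role: jira_connector_role
    Destination Warehouse: jira_connector_wh
    Destination Database: jira_db
    Destination Schema: jira_schema
    Destination Table: jira_issues
    Authorization Method: BASIC
    Jira Email: jane.doe@example.com
    Jira API Token: <token created in Get the credentials>
    Environment URL: https://example.atlassian.net
    Search Type: SIMPLE
    Project Name: TEST
    Maximum Page Size: 200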

Run the flow

  1. Right-click on the canvas and select Enable all Controller Services.

  2. Right-click on the imported process group and select Start. The connector starts the data ingestion.
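Once the flow is running, you can confirm that issues are arriving by querying the destination table. This check assumes the placeholder names used earlier:

    -- Count the ingested issues (all names are placeholders).
    SELECT COUNT(*) FROM jira_db.jira_schema.jira_issues;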
