Set up the Openflow Connector for LinkedIn Ads¶
Note
The connector is subject to the Connector Terms.
This topic describes the steps to set up the Openflow Connector for LinkedIn Ads.
Prerequisites¶
Ensure that you have reviewed About Openflow Connector for LinkedIn Ads.
Ensure that you have set up Openflow.
Get the credentials¶
As a LinkedIn Ads user, perform the following tasks:
Optional: If you don’t have an ad account to run and manage campaigns, create one.
Ensure that the user account has at least a VIEWER role on the ad account.
Use the user account to apply for Advertising API access. For more information, see the Microsoft quick start (https://learn.microsoft.com/en-us/linkedin/marketing/quick-start?view=li-lms-2025-02#step-1-apply-for-api-access).
Obtain a refresh token (https://learn.microsoft.com/en-us/linkedin/shared/authentication/programmatic-refresh-tokens#step-1-getting-a-refresh-token) using the authorization code grant flow. Use 3-legged OAuth (https://learn.microsoft.com/en-us/linkedin/shared/authentication/authorization-code-flow?context=linkedin%2Fcontext&tabs=HTTPS1#how-to-implement-3-legged-oauth) with the r_ads_reporting scope.
Obtain the client ID, client secret, and refresh token. These credentials are used to authenticate with the LinkedIn Ads API.
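If you want to verify these credentials before configuring the connector, you can exchange the refresh token for an access token yourself. The following is a minimal sketch in Python using the requests library; the endpoint is LinkedIn's standard OAuth 2.0 token endpoint, and the credential values are placeholders you must supply:

```python
import requests

# LinkedIn's OAuth 2.0 token endpoint (see the LinkedIn authentication docs).
TOKEN_ENDPOINT = "https://www.linkedin.com/oauth/v2/accessToken"

# Placeholder credentials obtained during app registration.
payload = {
    "grant_type": "refresh_token",
    "refresh_token": "<your-refresh-token>",
    "client_id": "<your-client-id>",
    "client_secret": "<your-client-secret>",
}

response = requests.post(TOKEN_ENDPOINT, data=payload, timeout=30)
response.raise_for_status()

token = response.json()
# A successful response includes a short-lived access token and its lifetime.
print("Access token obtained; expires in", token["expires_in"], "seconds")
```

If the request succeeds, the client ID, client secret, and refresh token are valid and can be supplied to the connector as-is.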
Set up Snowflake account¶
As a Snowflake account administrator, perform the following tasks:
Create a new role or use an existing role and grant it the required database privileges.
Create a new Snowflake service user with the type SERVICE.
Grant the Snowflake service user the role you created in the previous steps.
Configure key-pair authentication for the Snowflake SERVICE user from step 2.
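If you do not already have a key pair, one in the PKCS8 PEM format that key-pair authentication expects can be generated in a few lines. The following is a minimal sketch using the Python cryptography package; the file names and passphrase are placeholders:

```python
from cryptography.hazmat.primitives import serialization
from cryptography.hazmat.primitives.asymmetric import rsa

# Generate a 2048-bit RSA key pair.
key = rsa.generate_private_key(public_exponent=65537, key_size=2048)

# Serialize the private key in PKCS8 PEM format, encrypted with a passphrase.
private_pem = key.private_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PrivateFormat.PKCS8,
    encryption_algorithm=serialization.BestAvailableEncryption(b"<passphrase>"),
)

# Serialize the matching public key in PEM format.
public_pem = key.public_key().public_bytes(
    encoding=serialization.Encoding.PEM,
    format=serialization.PublicFormat.SubjectPublicKeyInfo,
)

with open("rsa_key.p8", "wb") as f:
    f.write(private_pem)
with open("rsa_key.pub", "wb") as f:
    f.write(public_pem)
```

Register the public key with the SERVICE user, and keep the private key file for the connector configuration below.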
Snowflake strongly recommends this step. Configure a secrets manager supported by Openflow, for example, AWS, Azure, or HashiCorp, and store the public and private keys in the secret store.
Note
If, for any reason, you do not wish to use a secrets manager, you are responsible for safeguarding the public and private key files used for key-pair authentication according to the security policies of your organization.
Once the secrets manager is configured, determine how you will authenticate to it. On AWS, it is recommended that you use the EC2 instance role associated with Openflow, because this way no other secrets have to be persisted.
In Openflow, configure a Parameter Provider associated with this secrets manager: from the hamburger menu in the upper right, navigate to Controller Settings » Parameter Provider, and then fetch your parameter values.
At this point, all credentials can be referenced with the associated parameter paths, and no sensitive values need to be persisted within Openflow.
If any other Snowflake users require access to the raw documents and tables ingested by the connector (for example, for custom processing in Snowflake), grant those users the role created in step 1.
Designate a warehouse for the connector to use. Start with the smallest warehouse size, then experiment with the size depending on the number of tables being replicated and the amount of data transferred. Large numbers of tables typically scale better with multi-cluster warehouses than with larger warehouse sizes.
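To confirm that the service user, role, key pair, and warehouse work together before wiring up Openflow, you can open a test session with the Snowflake Python connector. This is a minimal sketch, assuming the snowflake-connector-python package and placeholder identifiers; substitute your own account, user, role, and warehouse names:

```python
import snowflake.connector

# Placeholder values; replace with the account identifier, service user,
# role, and warehouse created in the previous steps.
conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<service_user>",
    private_key_file="rsa_key.p8",            # PKCS8 private key file
    private_key_file_pwd="<key_passphrase>",  # omit if the key is unencrypted
    role="<connector_role>",
    warehouse="<connector_warehouse>",
)

with conn.cursor() as cur:
    # A trivial query proves authentication and role/warehouse access.
    cur.execute("SELECT CURRENT_USER(), CURRENT_ROLE(), CURRENT_WAREHOUSE()")
    print(cur.fetchone())

conn.close()
```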
Configure the connector¶
As a data engineer, perform the following tasks to configure a connector:
Create a database and schema in Snowflake for the connector to store ingested data.
Download the connector definition file.
Import the connector definition into Openflow:
Open the Snowflake Openflow canvas.
To add a process group, drag the Process Group icon from the tool palette onto the canvas.
In the Create Process Group dialog, select the connector definition file to import.
Note
Each process group is responsible for fetching data for a single report configuration. To use multiple configurations on a regular schedule, create a separate process group for each report configuration.
Right-click on the imported process group and select Parameters.
Populate the required parameter values as described in Flow parameters.
Flow parameters¶
This section describes the flow parameters that you can configure, based on the following parameter contexts:
Config Linkedin connection: Used to establish a connection with the LinkedIn Ads API.
Config Snowflake connection: Used to establish a connection with Snowflake.
Flow LinkedIn to Snowflake: Contains all parameters from the other two parameter contexts, plus additional parameters specific to a given process group.
Because this parameter context contains ingestion-specific details, you must create a new parameter context for each new report and process group.
Config Linkedin connection¶
| Parameter | Description |
|---|---|
| Client ID | The client ID of an application registered on LinkedIn. |
| Client Secret | The client secret related to the client ID. |
| Refresh Token | A user obtains the refresh token after the app registration process. It is used together with the client ID and the client secret to get an access token. |
| Token Endpoint | The token endpoint is obtained by a user during the app registration process. |
Config Snowflake connection¶
| Parameter | Description |
|---|---|
| Authentication Strategy | Defines how the connector will connect to Snowflake. Use the value KEY_PAIR. |
| Snowflake Account Identifier | The Snowflake account identifier where data retrieved from the LinkedIn Ads API is stored. |
| Snowflake User | Username used to connect to the Snowflake instance. |
| Snowflake Private Key | The RSA private key used for authentication. The RSA key must be formatted according to PKCS8 standards and have standard PEM headers and footers. Either Snowflake Private Key File or Snowflake Private Key must be defined. |
| Snowflake Private Key File | The file that contains the RSA private key used for authentication to Snowflake, formatted according to PKCS8 standards and with standard PEM headers and footers. The header line starts with -----BEGIN PRIVATE. |
| Snowflake Private Key Password | The password associated with the Snowflake Private Key File. |
| Snowflake Role | The Snowflake role with appropriate privileges on the destination database and schema (USAGE and CREATE TABLE). |
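If you are unsure whether a private key file satisfies the PKCS8 and PEM requirements above, you can try loading it with the Python cryptography package. A minimal sketch, assuming a placeholder file name and passphrase:

```python
from cryptography.hazmat.primitives import serialization

# Placeholder path and passphrase; pass password=None for an unencrypted key.
with open("rsa_key.p8", "rb") as f:
    key = serialization.load_pem_private_key(f.read(), password=b"<passphrase>")

# If loading succeeds, the file has valid PEM headers and a PKCS8 body.
print(type(key).__name__, getattr(key, "key_size", None))
```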
Flow LinkedIn to Snowflake¶
The following table lists parameters that are not inherited from the other parameter contexts:
| Parameter | Description |
|---|---|
| Report Name | The unique name of the report. It is uppercased and used as the destination table name. |
| Start Date | The start date of the data to be fetched. |
| Time Granularity | Time granularity of results. Possible values: ALL, DAILY, MONTHLY, YEARLY. |
| Conversion Window | The timeframe for which data is refreshed during incremental loads. It must be specified only when the DAILY time granularity is chosen; it does not apply to other time granularities. The conversion window can be any number from 1 to 365. |
| Metrics | List of comma-separated metrics. Metrics are case-sensitive. For more information, see Reporting (https://learn.microsoft.com/en-us/linkedin/marketing/integrations/ads-reporting/ads-reporting?view=li-lms-2025-03&tabs=http#metrics-available). Up to 20 metrics can be specified, including the mandatory metrics. |
| Pivots | List of comma-separated pivots. For the available pivot values, see the LinkedIn reporting documentation. The connector uses the Analytics Finder when zero or one pivot is specified, and switches to the Statistics Finder when two or three pivots are specified. You can use a maximum of three pivots. |
| Shares | List of comma-separated share IDs. This parameter can be used to filter results by share ID. |
| Campaigns | List of comma-separated campaign IDs. This parameter can be used to filter results by campaign ID. |
| Campaign Groups | List of comma-separated campaign group IDs. This parameter can be used to filter results by campaign group ID. |
| Accounts | List of comma-separated account IDs. This parameter can be used to filter results by account ID. |
| Companies | List of comma-separated company IDs. This parameter can be used to filter results by company ID. |
| Destination Database | The destination database in which the destination table is created. It must be created by the user. |
| Destination Schema | The destination schema in which the destination table is created. It must be created by the user. |
Note
You must specify at least one of the filters: Shares, Campaigns, Campaign Groups, Accounts, or Companies.
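To preview what a report configuration returns before scheduling it, you can issue the equivalent Analytics Finder request directly. The following is a minimal Python sketch against LinkedIn's adAnalytics endpoint; the pivot, metric names, date, campaign ID, and version header are placeholders, and the exact Rest.li query shape should be checked against the LinkedIn reporting documentation linked above:

```python
import requests

# Access token obtained via the refresh-token exchange shown earlier.
ACCESS_TOKEN = "<access-token>"

# Analytics Finder request (used when zero or one pivot is specified).
url = (
    "https://api.linkedin.com/rest/adAnalytics"
    "?q=analytics"
    "&pivot=CAMPAIGN"                                          # a single pivot
    "&timeGranularity=DAILY"
    "&dateRange=(start:(year:2025,month:1,day:1))"             # placeholder start date
    "&campaigns=List(urn%3Ali%3AsponsoredCampaign%3A123456)"   # one filter is required
    "&fields=impressions,clicks,dateRange,pivotValues"         # case-sensitive metrics
)

response = requests.get(
    url,
    headers={
        "Authorization": f"Bearer {ACCESS_TOKEN}",
        "LinkedIn-Version": "202503",        # placeholder API version
        "X-Restli-Protocol-Version": "2.0.0",
    },
    timeout=30,
)
response.raise_for_status()

for row in response.json().get("elements", []):
    print(row)
```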
Run the flow¶
Right-click on the canvas and select Enable all Controller Services.
Right-click on the imported process group and select Start.
The connector starts the data ingestion.
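Once the flow is running, you can confirm that rows are landing by querying the destination table. A minimal sketch, reusing the key-pair connection from the setup section and assuming placeholder database, schema, and report names:

```python
import snowflake.connector

conn = snowflake.connector.connect(
    account="<account_identifier>",
    user="<service_user>",
    private_key_file="rsa_key.p8",
    role="<connector_role>",
    warehouse="<connector_warehouse>",
)

with conn.cursor() as cur:
    # The destination table is named after the uppercased Report Name parameter.
    cur.execute(
        'SELECT COUNT(*) FROM "<DESTINATION_DATABASE>"."<DESTINATION_SCHEMA>"."<REPORT_NAME>"'
    )
    print("Ingested rows:", cur.fetchone()[0])

conn.close()
```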