Tutorial Qlik Replicate Demo – MySQL to Kafka


Streaming Data to Kafka

Step 1 – MySQL Source Configuration

For this use case, we will simply reuse the MySQL source endpoint that we created in the Database-to-Database use case. If you chose to skip the Database-to-Database use case, that is fine. Simply navigate to the instructions for creating the MySQL Source Configuration in the Database-to-Database use case and then return here to continue with this Kafka use case.

MySQL Source 5 Image

If you wish, you can Test Connection to ensure that everything is still OK with the MySQL source connection. To do this, click on Manage Endpoint Connections... and then select the MySQL source endpoint. From there you can Test Connection.
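If you would also like to double-check the source outside of Replicate, the sketch below is one way to do it from Python with the mysql-connector-python package. The host, user, and password shown are hypothetical placeholders; substitute the values from your MySQL source endpoint. The binlog_format check reflects the fact that Replicate's change data capture relies on row-based binary logging.

  # Hedged sketch: verify MySQL connectivity and binary log format outside Replicate.
  # Host, user, and password below are placeholders; use your own endpoint values.
  import mysql.connector  # pip install mysql-connector-python

  conn = mysql.connector.connect(host="mysql", port=3306, user="replicate_user", password="changeme")
  cur = conn.cursor()
  cur.execute("SHOW VARIABLES LIKE 'binlog_format'")   # change data capture expects ROW-based binary logging
  print(cur.fetchone())
  cur.close()
  conn.close()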

For more details about using MySQL as a source, please review the section “Using a MySQL-Based Database as a Source” in Chapter 8 “Adding and Managing Source Endpoints” of the Qlik Replicate User Guide.

Streaming Data to Kafka

Step 2 – Kafka Target Configuration

Next we need to configure our Kafka target endpoint. The process is much the same as you saw with the previous endpoints, and once again you will note that the configuration process is context-sensitive as we move along.

As before, the first step in the configuration process is to tell Replicate that we want to create a new endpoint. If you are back on the main window, you will need to click on the Manage Endpoint Connections button.

Manage Endpoints Image

and then press the + New Endpoint Connection button.

Manage Endpoints Image

and you will see a window that resembles this:

New Endpoint Image

We will now create a Kafka Target endpoint:

  • Replace the text New Endpoint Connection 1 with something more descriptive like Kafka-JSON or Kafka-Avro depending on the message format you intend to configure. If you are not sure at this point, a simple Kafka Target will do fine.
  • Make sure the Target radio button is selected.
  • Select Kafka from the dropdown selection box.

Configuring Replicate to Deliver JSON-Formatted Messages

If you want to deliver messages in JSON format, follow these steps.

Kafka Target 1j Image

Kafka Target 2j Image

Kafka Target 3j Image

Kafka Target 4j Image

Kafka Target 5j Image

Fill in the blanks as indicated in the images above:

  • Broker servers: kafka:29092
  • Security/Use SSL: NOT checked
  • Security/Authentication: None
  • Message Properties/Format: JSON
  • Message Properties/Compression: None
  • Data Message Publishing: Separate topic for each table
  • Data Message Publishing/Partition strategy: By message key
  • Data Message Publishing/Message key: Primary key columns
  • Metadata Message Publishing/Publish: Do not publish metadata messages
  • Metadata Message Publishing/Wrap data…: NOT checked

and then click on Test Connection. Your screen should look like the following, indicating that your connection succeeded.

Kafka Target 6j Image

Assuming so, click Save and the configuration of your Kafka target endpoint is complete. Click Close to close the window.
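Later, once the task from Step 3 is running, you can verify from outside the Replicate console that JSON messages are arriving on the broker. The sketch below is one way to do that with the kafka-python package; the broker address comes from the endpoint configuration above, while the topic name json.Player is an assumption based on the tables and schema rename used later in this tutorial.

  # Hedged sketch: peek at JSON-formatted messages on one topic (pip install kafka-python).
  # The topic name is an assumption; adjust it to match what you see in your environment.
  import json
  from kafka import KafkaConsumer

  consumer = KafkaConsumer(
      "json.Player",
      bootstrap_servers="kafka:29092",
      auto_offset_reset="earliest",           # start from the beginning of the topic
      consumer_timeout_ms=10000,              # stop iterating after 10s with no new messages
      value_deserializer=lambda v: json.loads(v.decode("utf-8")),
  )
  for msg in consumer:
      print(msg.topic, msg.offset, msg.value)
  consumer.close()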

Configuring Replicate to Deliver Avro-Formatted Messages

If you want to deliver messages in Avro format, follow these steps.

Kafka Target 1a Image

Kafka Target 2a Image

Kafka Target 3a Image

Note above that when you select Avro as the message format, there are options for whether or not to use an Avro feature called Logical Data Types and whether or not to encode the message key in Avro format. The remainder of the tutorial assumes that you will leave both unchecked. Selecting these options will not adversely impact execution, however, so you may select them if you choose.

Kafka Target 4a Image

Kafka Target 5a Image

Fill in the blanks as indicated in the images above:

  • Broker servers: kafka:29092
  • Security/Use SSL: NOT checked
  • Security/Authentication: None
  • Message Properties/Format: Avro
  • Message Properties/Compression: None
  • Data Message Publishing: Separate topic for each table
  • Data Message Publishing/Partition strategy: By message key
  • Data Message Publishing/Message key: Primary key columns
  • Metadata Message Publishing/Publish: Publish data schemas to the Confluent Schema Registry
  • Schema Registry server(s): sreghost:8081
  • Authentication: None
  • Subject compatibility mode: Use Schema Registry defaults.

You may notice that Replicate supports the full suite of compatibility modes available in the Confluent Schema Registry. This tutorial does not involve schema evolution, so the selection here has no practical effect.

and then click on Test Connection. Your screen should look like the following, indicating that your connection succeeded.

Kafka Target 6a Image

Assuming so, click Save and the configuration of your Kafka target endpoint is complete. Click Close to close the window.
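As with the JSON endpoint, you can optionally read the Avro messages back with a small script once the task is running. The sketch below uses the confluent-kafka Python package together with the Schema Registry address configured above; the topic name avro.Player is an assumption tied to the schema rename configured in Step 3.

  # Hedged sketch: decode Avro messages using the Confluent Schema Registry
  # (pip install confluent-kafka). Broker and registry addresses match the endpoint
  # settings above; the topic name is an assumption.
  from confluent_kafka import Consumer
  from confluent_kafka.schema_registry import SchemaRegistryClient
  from confluent_kafka.schema_registry.avro import AvroDeserializer
  from confluent_kafka.serialization import SerializationContext, MessageField

  registry = SchemaRegistryClient({"url": "http://sreghost:8081"})
  deserialize = AvroDeserializer(registry)

  consumer = Consumer({
      "bootstrap.servers": "kafka:29092",
      "group.id": "avro-peek",
      "auto.offset.reset": "earliest",
  })
  consumer.subscribe(["avro.Player"])

  try:
      while True:
          msg = consumer.poll(1.0)
          if msg is None or msg.error():
              continue
          record = deserialize(msg.value(), SerializationContext(msg.topic(), MessageField.VALUE))
          print(msg.topic(), msg.offset(), record)
  finally:
      consumer.close()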

For More Detailed Information

For more details about using Kafka as a target, please review the section “Using Kafka as a Target” in Chapter 9 “Adding and Managing Target Endpoints” of the Qlik Replicate User Guide.

Streaming Data to Kafka

Step 3 – Configure Your Task

Now that we have configured our MySQL source and Kafka target endpoints, we need to tie them together in what we call a Replicate task. In short, a task defines the following:

  • A source endpoint
  • A target endpoint
  • The list of tables that we want to capture
  • Any transformations we want to make on the data

To get started, we need to create a task. Click on the + New Task button at the top of the screen.

Start Task 1 Image

Once you do, a window like this will pop up:

Start Task 2a Image

Give this task a meaningful name like MySQL to Kafka. For this task we will take the defaults:

  • Name: MySQL to Kafka
  • Unidirectional
  • Full Load: enabled
  • Apply Changes: enabled
  • Store Changes: disabled

Note: a blue highlight means an option is enabled; click an option to toggle it on or off.

When you have everything set, press OK to create the task. Once you have completed this step you will see a window that looks like this:

Kafka Task 1 Image

Qlik Replicate is all about ease of use. The interface is point-and-click, drag-and-drop. To configure our task, we need to select a source endpoint (MySQL) and a target endpoint (Kafka). You can either drag the MySQL Source endpoint from the box on the left of the screen and drop it into the circle that says Drop source endpoint here, or you can click on the arrow that appears just to the right of the endpoint when you highlight it.

Kafka Task 2 Image

Kafka Task 3 Image

Repeat the same process for the Kafka-JSON or Kafka-Avro target endpoint that you created in the previous step. Your screen should now look something like this:

Kafka Task 4 Image

Our next step is to select the tables we want to replicate from MySQL into Kafka. Click on the Table Selection... button in the top center of your browser.

Kafka Task 5 Image

and from there select the testdrive schema.

Start Task 6 Image

Enter % where it says Table: and press the Search button. This will retrieve a list of all the tables in the testdrive schema.

Note: entering % is not strictly required. By default, Qlik Replicate will search for all tables (%) if you do not limit the search.

Start Task 7 Image

In the MySQL to Postgres task, we captured all of the tables in the schema. For this exercise, we will instead choose only a few tables.

  • testdrive.PitchingPost
  • testdrive.Player
  • testdrive.SeriesPost

Select each table from the Results list and press the > button to move them into the Selected Tables list. Note that multi-select is enabled. You can select the tables all at once, or move them individually.

Kafka Task 6 Image

At this point we could define transformations on the selected tables if we wanted, but we will keep it simple for this part of the Test Drive and move the data as is instead … so just press OK at the bottom of the screen.

Note: we did configure a transformation in the database-to-database section. You can refer to Configure a Transformation for more information if you skipped it.

Kafka Task 7 Image

In case you choose to experiment with both JSON- and Avro-formatted messages, we will configure what Replicate calls a Global Transformation to rename the target schema, which in the case of Kafka will rename the topics.

Kafka Task 7a Image

Select Global Transformations... at the top of the screen.

Kafka Task 7b Image

Give it a name if you wish, select Rename schema, and then press Next.

Kafka Task 7c Image

By default, this screen applies the rule to any “schema.table” combination. That is good enough for us in this case, so take the default and press Next.

Kafka Task 7d Image

Here we are going to rename the schema from what it was called in the source database (“testdrive”) to something that will help us remember the message format we selected … so enter json or avro as appropriate here and then press Next.

Kafka Task 7e Image

Here we see a summary of the transformation we are creating. You can press Finish now.

Kafka Task 7f Image

From here you can press OK to return to the main screen.

That completes configuration of the task. We are now ready to save our task and run it. Press Save at the top left of the window and then press Run.

Kafka Task 8 Image

Streaming Data to Kafka

Step 4 – Run Your Task

When you press Run, Replicate will automatically switch from Designer mode to Monitor mode. You will be able to watch the status of the full load as it occurs, and then switch to monitoring change data capture as well.

Kafka Task 9 Image

After Full Load is complete, click on the Completed bar to display the tables. There is DML activity running in the background. Click on the Change Processing tab to see it in action.

Note: Changes to the tables occur somewhat randomly in the background. You may need to wait a few minutes before you will see changes appear in the tables that we selected.

Kafka Task 10 Image

If you would like to see some of the messages we are delivering to Kafka, click on the following link:

This link will open a tool that will allow you to browse the topics in the Kafka broker and display messages within those topics.
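If you prefer a script to a browser, the same information is available programmatically. The sketch below, which only assumes the broker address from the endpoint configuration, lists the topics on the broker with the kafka-python package.

  # Hedged sketch: list the topics on the broker as an alternative to browsing them in the UI.
  from kafka import KafkaConsumer

  consumer = KafkaConsumer(bootstrap_servers="kafka:29092")
  for topic in sorted(consumer.topics()):
      print(topic)
  consumer.close()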

Kafkdrop Image 1

First you will need to log in:

  • User Name: admin
  • Password: eI3chl835qMRWt2F

Kafkdrop Image 2

On this screen, you get a hint as to why we renamed our target schemas. In this environment, we have delivered the same data from different tasks using both JSON- and Avro-formatted messages. Knowing which topics are which will be helpful as we browse message content.

Kafkdrop Image 3

As an example, select the Player topic (under json or avro, depending on which format you configured) and then select View Messages.

Depending on the message format you selected, you may need to change the configuration settings in the message browser for the topic appropriately.

Kafkdrop Image 4a

If your messages were delivered in Avro format (with schemas published to the Schema Registry), you will need to select Avro as the message format. If you also opted to encode the message key in Avro, select Avro for the key format as well. Otherwise leave the key format at its default.

Kafkdrop Image 4j

If your messages were delivered as JSON payloads, then you can take the defaults.

Kafkdrop Image 5

In either case, you can move around the topic by changing the Offset and pressing View Messages.

Kafkdrop Image 6

You can expand a message to make it more readable by clicking on the green badge beside each message.
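The same offset-based browsing can be done in code if you prefer. The sketch below uses kafka-python to jump to a specific offset in one partition and print a handful of messages; the topic name and offsets are illustrative assumptions.

  # Hedged sketch: seek to a specific offset in a topic partition and print a few messages,
  # mirroring the Offset control in the browser tool. Topic name and offsets are assumptions.
  from kafka import KafkaConsumer, TopicPartition

  consumer = KafkaConsumer(bootstrap_servers="kafka:29092", consumer_timeout_ms=10000)
  tp = TopicPartition("json.Player", 0)
  consumer.assign([tp])
  consumer.seek(tp, 10)                      # start reading at offset 10
  for msg in consumer:
      print(msg.offset, msg.value.decode("utf-8"))
      if msg.offset >= 14:                   # stop after five messages
          break
  consumer.close()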

When you have seen enough, you can declare Victory! for this part of the Test Drive. Press Stop in the top left corner of the Replicate console to end the task. After pressing Stop and clicking Yes in the confirmation dialog, close the MySQL to Kafka tab or click on the TASKS tab to return to the main window.

Kafka Task 11 Image

Summary

You just:

  • Defined access and authentication into a MySQL source and a Kafka target
  • Defined the source tables from which you want to create Kafka messages
  • Configured the MySQL to Kafka task
  • Captured initial data from the source while maintaining business continuity (DML activity was going on in the background to simulate users working on the source database)
  • Automatically created Kafka messages from the initial table state
  • Captured all new transactions that occurred while the initial load was running
  • Turned all net new data into Kafka messages
  • Observed change data being recorded as it was sent to and applied at the target

All that in 4 easy steps!

You can now move on to the next part of the Test Drive.
