For this use case, we will simply reuse the MySQL source endpoint that we created in the Database-to-Database use case. If you chose to skip the Database-to-Database use case, that is fine. Simply navigate to the instructions for creating the MySQL Source Configuration in the Database-to-Database use case and then return here to continue with this Kafka use case.
Feel free to Test Connection to ensure that everything is still OK with the MySQL source connection if you wish. To do this, click on Manage Endpoint Connections..., select the MySQL source endpoint, and then click Test Connection.
For more details about using MySQL as a source, please review the section “Using a MySQL-Based Database as a Source” in Chapter 8, “Adding and Managing Source Endpoints,” of the Qlik Replicate User Guide.
Next we need to configure our Kafka target endpoint. The process is much the same as you saw with the previous endpoints, and once again you will note that the configuration process is context-sensitive as we move along.
As before, the first step in the configuration process is to tell Replicate that we want to create a new endpoint. If you are back on the main window, click on the Manage Endpoint Connections button and then press the + New Endpoint Connection button. You will see a window that resembles this:
We will now create a Kafka target endpoint named Kafka-JSON or Kafka-Avro, depending on the message format you intend to configure. If you are not sure at this point, a simple Kafka Target will do fine. Make sure the Target radio button is selected, and choose Kafka from the dropdown selection box.

If you want to deliver messages in JSON format, follow these steps.
Fill in the blanks as indicated in the images above:

Broker servers: kafka:29092
Use SSL: NOT checked
Authentication: None
Message format: JSON
Compression: None
Publish the data to: Separate topic for each table
Partitioning strategy: By message key
Message key: Primary key columns
Metadata message publishing: Do not publish metadata messages
Leave the remaining checkbox NOT checked
Then click on Test Connection. Your screen should look like the following, indicating that your connection succeeded. Assuming so, click Save and the configuration of your Kafka target endpoint is complete. Click Close to close the window.
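If you would like to sanity-check the JSON delivery later from outside Replicate, a few lines of Python are enough. The sketch below is illustrative only and not part of the tutorial environment: it assumes the kafka-python package is installed, that kafka:29092 is reachable from wherever you run it, and that a per-table topic such as json.Player (the name produced by the schema rename we configure later) exists once the task has run.

```python
# Minimal sketch: browse JSON data messages that Replicate delivered to Kafka.
# Assumes: pip install kafka-python, broker reachable at kafka:29092,
# and a per-table topic (e.g. "json.Player") created by the task.
import json

from kafka import KafkaConsumer

consumer = KafkaConsumer(
    "json.Player",                      # hypothetical topic name after the schema rename
    bootstrap_servers="kafka:29092",
    auto_offset_reset="earliest",       # read from the beginning of the topic
    consumer_timeout_ms=10000,          # stop iterating when no new messages arrive
    value_deserializer=lambda v: json.loads(v.decode("utf-8")),
)

for msg in consumer:
    # The key holds the primary key columns; the value is the JSON data message.
    print(f"key={msg.key} offset={msg.offset}")
    print(json.dumps(msg.value, indent=2))

consumer.close()
```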
If you want to deliver messages in Avro format, follow these steps.
Note above that when you select Avro as the message format, there are options for whether to use an Avro feature called Logical Data Types and whether to encode the message key in Avro format. The remainder of the tutorial assumes that you will leave them unchecked. However, selecting these options will not adversely impact execution, so you may select them if you choose.
Fill in the blanks as indicated in the images above:

Broker servers: kafka:29092
Use SSL: NOT checked
Authentication: None
Message format: Avro
Compression: None
Publish the data to: Separate topic for each table
Partitioning strategy: By message key
Message key: Primary key columns
Metadata message publishing: Publish data schemas to the Confluent Schema Registry
Schema Registry servers: sreghost:8081
Schema Registry authentication: None
Subject compatibility mode: Use Schema Registry defaults

You may notice that Replicate supports the full suite of compatibility modes available in the Confluent Schema Registry. The tutorial does not involve schema evolution tasks, so the selection here is more or less moot.
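Should you ever want to experiment with those compatibility modes outside of Replicate, the Schema Registry exposes them through a small REST API. The following is an illustrative sketch only, assuming the requests package and the sreghost:8081 registry above; the subject name follows the registry's default <topic>-value convention and is hypothetical here.

```python
# Minimal sketch: read and set compatibility modes via the Confluent Schema
# Registry REST API. Valid values include BACKWARD, FORWARD, FULL, NONE,
# plus their *_TRANSITIVE variants. The subject name below is hypothetical.
import requests  # pip install requests

BASE = "http://sreghost:8081"
subject = "avro.Player-value"  # default subject naming: <topic>-value

# Global default compatibility for the registry.
print(requests.get(f"{BASE}/config").json())

# Override compatibility for a single subject.
requests.put(f"{BASE}/config/{subject}", json={"compatibility": "BACKWARD"})
print(requests.get(f"{BASE}/config/{subject}").json())
```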
Then click on Test Connection. Your screen should look like the following, indicating that your connection succeeded. Assuming so, click Save and the configuration of your Kafka target endpoint is complete. Click Close to close the window.
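As with JSON, you can verify Avro delivery from outside Replicate. The sketch below is illustrative only: it assumes the confluent-kafka package (with its Avro extras) is installed, that the broker and Schema Registry above are reachable, and that a topic such as avro.Player exists once the task has run; the consumer group name is made up.

```python
# Minimal sketch: consume Avro data messages using the Confluent Schema Registry.
# Assumes: pip install "confluent-kafka[avro]", broker at kafka:29092,
# Schema Registry at sreghost:8081, and a topic such as "avro.Player".
from confluent_kafka import DeserializingConsumer
from confluent_kafka.schema_registry import SchemaRegistryClient
from confluent_kafka.schema_registry.avro import AvroDeserializer

# The deserializer fetches each message's writer schema from the registry by ID.
schema_registry = SchemaRegistryClient({"url": "http://sreghost:8081"})

consumer = DeserializingConsumer({
    "bootstrap.servers": "kafka:29092",
    "group.id": "testdrive-browser",          # hypothetical consumer group
    "auto.offset.reset": "earliest",
    "value.deserializer": AvroDeserializer(schema_registry),
})
consumer.subscribe(["avro.Player"])            # hypothetical topic name

try:
    while True:
        msg = consumer.poll(timeout=10.0)
        if msg is None:
            break                              # no more messages within the timeout
        if msg.error():
            raise RuntimeError(msg.error())
        # msg.value() is already decoded into a Python dict.
        print(msg.key(), msg.value())
finally:
    consumer.close()
```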
For more details about using Kafka as a target, please review the section “Using Kafka as a Target” in Chapter 9, “Adding and Managing Target Endpoints,” of the Qlik Replicate User Guide.
Now that we have configured our MySQL source and Kafka target endpoints, we need to tie them together in what we call a Replicate task. In short, a task defines the source and target endpoints, the tables to be replicated, and how the data should be moved between them.
To get started, we need to create a task. Click on the + New Task button at the top of the screen. Once you do, a window like this will pop up:
Give this task a meaningful name like MySQL to Kafka. For this task we will take the defaults:

Name: MySQL to Kafka
Replication profile: Unidirectional
Full Load: enabled
Apply Changes: enabled
Store Changes: disabled

(Blue highlight means the option is enabled; click to enable / disable.)

When you have everything set, press OK to create the task. Once you have completed this step you will see a window that looks like this:
Qlik Replicate is all about ease of use. The interface is point-and-click, drag-and-drop. To configure our task, we need to select a source endpoint (MySQL) and a target endpoint (Kafka). You can either drag the MySQL Source endpoint from the box on the left of the screen and drop it into the circle that says Drop source endpoint here, or you can click on the arrow that appears just to the right of the endpoint when you highlight it.
Repeat the same process for the Kafka-JSON or Kafka-Avro target endpoint that you created in the previous step. Your screen should now look something like this:
Our next step is to select the tables we want to replicate from MySQL into Kafka. Click on the Table Selection... button in the top center of your browser, and from there select the testdrive schema. Enter % where it says Table: and press the Search button. This will retrieve a list of all the tables in the testdrive schema.
Note: entering % is not strictly required. By default, Qlik Replicate will search for all tables (%) if you do not limit the search.
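For the curious, % here is simply the SQL LIKE wildcard. You can reproduce the same table list directly against MySQL with a query like the sketch below, which assumes the mysql-connector-python package and placeholder connection details; substitute the credentials for your own environment.

```python
# Minimal sketch: list the tables in the testdrive schema, i.e. the same set
# Replicate's "%" search returns. Host and credentials are placeholders.
import mysql.connector  # pip install mysql-connector-python

conn = mysql.connector.connect(
    host="mysqldb", user="user", password="password"  # hypothetical credentials
)
cur = conn.cursor()
cur.execute(
    "SELECT table_name FROM information_schema.tables "
    "WHERE table_schema = 'testdrive' AND table_name LIKE %s",
    ("%",),  # '%' matches every table name, just like Replicate's default search
)
for (table_name,) in cur.fetchall():
    print(table_name)
conn.close()
```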
In the MySQL to Postgres task, we captured all of the tables in the schema. For this exercise, we will instead choose only a few tables.
Select each table from the Results list and press the > button to move them into the Selected Tables list. Note that multi-select is enabled; you can select the tables all at once, or move them individually. At this point we could define transformations on the selected tables if we wanted, but we will keep it simple for this part of the test drive and move the data as is instead … so just push OK at the bottom of the screen.
Note: we did configure a transformation in the database-to-database section. If you skipped it, you can refer to Configure a Transformation for more information.
In the event that you choose to experiment with both JSON- and Avro-formatted messages, we will configure what Replicate calls a Global Transformation to rename the target schema, which in the case of Kafka will rename the topics.
Select Global Transformations... at the top of the screen. Give it a name if you wish, select Rename schema, and then press Next.
By default, this screen applies the rule to any schema.table combination. That is good enough for us in this case, so take the default and press Next.
Here we are going to rename the schema from what it was called in the source database (“testdrive”) to something that will help us remember the message format we selected … so enter json or avro as appropriate here and then press Next.
Here we see a summary of the transformation we are creating. You can press Finish now. From here you can press OK to return to the main screen.
That completes configuration of the task. We are now ready to save our task and run it. Press Save at the top left of the window and then press Run. When you press Run, Replicate will automatically switch from Designer mode to Monitor mode. You will be able to watch the status of the full load as it occurs, and then switch to monitoring change data capture as well.
After Full Load is complete, click on the Completed bar to display the tables. There is DML activity running in the background; click on the Change Processing tab to see it in action.
Note: Changes to the tables occur somewhat randomly in the background. You may need to wait a few minutes before you will see changes appear in the tables that we selected.
If you would like to see some of the messages we are delivering to Kafka, click on the following link:
This link will open a tool that will allow you to browse the topics in the Kafka broker and display messages within those topics.
First you will need to log in:

Username: admin
Password: eI3chl835qMRWt2F
On this screen, you get a hint as to why we renamed our target schemas. In this environment, we have delivered the same data from different tasks using both JSON- and Avro-formatted messages. Knowing which topics are which will be helpful as we browse message content.
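If you prefer a script to the browser tool for this step, listing the topics also shows the effect of the rename. This is a minimal sketch, assuming confluent-kafka is installed and the broker is reachable at kafka:29092 from where you run it.

```python
# Minimal sketch: list the broker's topics to see the json.* / avro.* prefixes
# produced by the "Rename schema" global transformation.
from confluent_kafka.admin import AdminClient  # pip install confluent-kafka

admin = AdminClient({"bootstrap.servers": "kafka:29092"})
metadata = admin.list_topics(timeout=10)

for name in sorted(metadata.topics):
    print(name)  # e.g. json.Player and avro.Player once both tasks have run
```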
As an example, select the Player topic (json or avro, depending on what you have configured) and then select View Messages.
Depending on the message format you selected, you may need to adjust the message browser's configuration settings for the topic accordingly.
If your messages were delivered to the schema registry using Avro, then you will need to select Avro as the message format. If you opted to configure the key format as Avro as well, you will need to select Avro as the key format. Otherwise, leave the defaults.
If your messages were delivered as JSON payloads, then you can take the defaults.
In either case, you can move around the topic by changing the Offset and pressing View Messages.
You can expand a message to make it more readable by clicking on the green badge beside each message.
When you have seen enough, you can declare Victory! for this part of the Test Drive. Press Stop in the top left corner of the Replicate console to end the task. After pressing Stop and clicking Yes in the confirmation dialog, close the MySQL to Kafka tab or click on the TASKS tab to return to the main window.
You just configured a Kafka target endpoint, created a task connecting your MySQL source to it, ran a full load followed by live change data capture, and browsed the messages delivered to Kafka. All that in 4 easy steps!
You can now move on to the next part of the Test Drive.