ReshmiSriram
In today's data-driven world, processing and integrating real-time data from various sources is critical for businesses. This data regularly flows between multiple services for ETL, data processing, data analytics, and much more! Managing multiple first- and third-party services, orchestrating the data flow, and manually wiring each of these services together via connection strings are all painstaking, time-consuming processes that carry a high risk of human error during copy-pasting.
Confluent Connectors
Confluent Connectors manage the movement of data between your source and your sink (i.e., destination) SaaS services, ensuring high-throughput, low-latency Kafka streaming. We have now enhanced the Confluent Connector management experience between your first-party Azure services and Confluent Cloud using Apache Kafka® & Apache Flink® on Confluent Cloud – An Azure Native ISV Service. This preview feature auto-populates connection details as dropdowns, bringing a seamless, error-free end-to-end creation experience – all without leaving the Azure portal! Today, we announce preview support for Azure Blob Storage source and sink connectors via the Azure portal.
Fig 1. Confluent Connectors on Azure
Confluent Resource Hierarchy
A Confluent organization consists of environments, clusters, and topics, nested one within the other. Connectors sit under clusters and stream data in the form of messages, which are stored in named feeds called topics. The Confluent resource hierarchy is as follows:
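To make the nesting concrete, here is a minimal sketch of the hierarchy described above. The class and instance names (`Organization`, `contoso-org`, `orders`, etc.) are hypothetical illustrations only, not types from any official Confluent SDK:

```python
from dataclasses import dataclass, field
from typing import List

# Hypothetical classes mirroring the resource nesting described above.
@dataclass
class Topic:
    name: str

@dataclass
class Connector:
    name: str
    kind: str  # "source" or "sink"

@dataclass
class Cluster:
    name: str
    topics: List[Topic] = field(default_factory=list)
    connectors: List[Connector] = field(default_factory=list)

@dataclass
class Environment:
    name: str
    clusters: List[Cluster] = field(default_factory=list)

@dataclass
class Organization:
    name: str
    environments: List[Environment] = field(default_factory=list)

# One organization -> environment -> cluster, holding a topic and a
# sink connector that streams that topic out to a destination service.
org = Organization(
    name="contoso-org",
    environments=[Environment(
        name="dev",
        clusters=[Cluster(
            name="cluster-1",
            topics=[Topic("orders")],
            connectors=[Connector("blob-sink", "sink")],
        )],
    )],
)
```

Note that connectors and topics live side by side under a cluster: a connector reads from or writes to the topics of the cluster it belongs to.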
How to create a Confluent Connector on Azure (Preview)
You can now move data between your Kafka topics and Azure Blob Storage using Confluent Connectors in a few simple steps!
Note: To create Confluent Connectors, you must have your organization, environment, cluster, and topics pre-configured. If you haven't created a Confluent organization on Azure yet, check out this doc for detailed guidance. Hurry up – first-timers also get a free $1,000 credit to try it out!
- From the Overview page of your Confluent organization, go to Data streaming and click Connectors. Alternatively, click the Configure connectors button on the bottom half of the overview page.
Fig 3. Confluent Overview page
- Select your environment and cluster from the dropdown menus. You will see the list of Azure connectors you have created within that cluster.
- You can also filter the connectors by type (Source or Sink) and by status.
Fig 4. Connector tiles on Azure
- Click Create a connector. In the pop-up blade on the right, you can choose between Source and Sink connectors, select the plugin service you would like to connect with, and auto-fetch your Azure application details from dropdowns.
Fig 5. Create a new Confluent Connector
- Complete the Configuration tab to set up data formats, the number of tasks, and so on. Review your selections, then click Create. After a couple of minutes, you will be notified that the connector is up and ready!
- Clicking the new connector tile redirects you to the Confluent UI, where you can see the connector's health, throughput, and other stats.
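Under the hood, the portal steps above assemble a connector configuration. The sketch below shows roughly what such a configuration looks like for an Azure Blob Storage sink. The property names follow Confluent's connector documentation but should be verified against the current docs, and all values are placeholders:

```python
import json

# Sketch of an Azure Blob Storage sink connector configuration.
# Property names are taken from Confluent's connector docs as a best
# effort; verify them before use. All values below are placeholders.
connector_config = {
    "name": "blob-storage-sink",
    "config": {
        "connector.class": "AzureBlobSink",
        "topics": "orders",                       # topic(s) to export
        "input.data.format": "JSON",
        "output.data.format": "JSON",
        "azblob.account.name": "<storage-account>",
        "azblob.account.key": "<storage-account-key>",
        "azblob.container.name": "<container>",
        "tasks.max": "1",                         # number of tasks
        "kafka.auth.mode": "KAFKA_API_KEY",
        "kafka.api.key": "<api-key>",
        "kafka.api.secret": "<api-secret>",
    },
}

print(json.dumps(connector_config, indent=2))
```

The portal's dropdowns fill in the storage account, key, and container fields for you – which is exactly the copy-pasting this preview feature is designed to eliminate.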
What next?
We have some interesting additions to this preview feature planned for rollout soon!
- Connector support for additional Azure data services such as ADLS Gen2, EventHub Source, Synapse, Functions, and many more!
- Confluent Connector experience across Azure portal, SDK and CLI.
Resources
- Try out the Apache Kafka® & Apache Flink® on Confluent Cloud – An Azure Native ISV Service offering right away! Every new sign-up gets a free $1,000 credit!
- To learn more about this preview feature, check out the Microsoft Docs.
- Check out the Confluent blog announcing this preview feature.
- If you would like to give us feedback on this feature or the overall product, or have any suggestions for us to work on, please share them here!