Message queues for the DBA: sending data out into the world

Wait, is this about Service Broker?

When a DBA hears the words “message” and “queue,” chances are our thoughts first turn to Service Broker. If you’re talking about undervalued technologies in SQL Server, this one would darn near be at the top of my list. It can be used for a lot of things, but traditionally it was designed to send messages between SQL Servers. A classic example is order processing: an order is entered on a system and the data is written to SQL Server. Some (or all) of that data is crafted into a message that is dispatched to another server that handles a different function, like a payment processing system. All payment processing is done on this remote system, and when that piece finishes, the system dispatches another message back to the main system to record that the order has been paid for. Maybe that event causes another message to be dispatched to a fulfillment system, which will respond with another message once the order has been packed and shipped.

Service Broker is a technology inside SQL Server that enables these messages to be exchanged… between SQL Servers. Service Broker is a complex topic, and if you’re new to messaging it can be a little overwhelming. I’m not going to spend too much time on Service Broker in this post, although what I’m going to show DOES use it. Instead, I’m going to focus on other message queuing systems, and why they matter to you as a DBA.

Using message queues

Queuing and messaging systems have been around for a long time; longer than Service Broker. There are old classics like MSMQ, and newer, more robust solutions like RabbitMQ. They work like this: an application writes a message to the queue. The queue is a first in, first out model: things that go in first usually get processed first. While applications write to the queues, listeners watch the queue for items. A listener will take one (or more) items out of the queue and act on them. This might mean doing some sort of calculation, or maybe writing data to a database like SQL Server.
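The pattern is easy to sketch in a few lines. Here's a toy, in-process version in Python (a stand-in for a real broker like RabbitMQ, not how you'd actually talk to one): a producer writes messages to a queue, and a listener thread takes them out in first-in, first-out order and acts on them.

```python
import queue
import threading

# A toy in-process queue: a producer puts messages in, and a listener
# thread takes them out in first-in, first-out order and acts on them.
q = queue.Queue()
processed = []

def listener():
    while True:
        message = q.get()          # blocks until a message is available
        if message is None:        # sentinel value: stop listening
            break
        processed.append(message.upper())  # "act" on the message

t = threading.Thread(target=listener)
t.start()

for msg in ["first", "second", "third"]:
    q.put(msg)
q.put(None)  # tell the listener to shut down

t.join()
print(processed)  # ['FIRST', 'SECOND', 'THIRD'] -- FIFO order preserved
```

Swap the in-process queue for a broker on another machine and the shape is the same: producers never wait on the consumer, and the consumer works through the backlog in order.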

So why should you care about message queues? Remember how I said Service Broker is a great messaging system between SQL Servers? Well, unfortunately the world doesn’t (yet) run on SQL Server alone. Sometimes changes in our databases might have downstream changes that other applications or processes need to know about. These might be other database systems, web applications, or back-end systems. Speed might also be a factor; the sooner these systems know about new or changing data, the better.

As a developer-turned-DBA, I love queues. They mean applications don’t have to hit a database directly to do something; instead, services and listeners become the primary consumers and actors on databases. Queues also let me send data places without those places needing direct access to my databases. I can’t write an insert in SQL Server that writes my data directly to, say, a PostgreSQL database, but I can make it write to a message queue; an application then picks the message up from the queue and uses the data it contains to write to PostgreSQL.
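To make that concrete, here's a minimal Python sketch of the consuming side. The message shape, table, and column names are all invented for illustration; the point is that the listener never touches SQL Server, it only sees the message, and turns it into an insert for the downstream database.

```python
import json

# A hypothetical queue message (the field names below are invented for
# illustration; in a real system this payload would come from RabbitMQ).
raw_message = json.dumps({
    "FirstName": "Ada",
    "LastName": "Lovelace",
    "EmailAddress": "ada@example.com",
    "DOB": "1815-12-10",
})

def handle_message(payload):
    """Turn a queue message into a parameterized insert for a
    hypothetical PostgreSQL "people" table. A real listener would hand
    this to a database driver; here we just build and return it."""
    data = json.loads(payload)
    sql = ("INSERT INTO people (first_name, last_name, email_address, dob) "
           "VALUES (%s, %s, %s, %s)")
    params = (data["FirstName"], data["LastName"],
              data["EmailAddress"], data["DOB"])
    return sql, params

sql, params = handle_message(raw_message)
print(params)  # ('Ada', 'Lovelace', 'ada@example.com', '1815-12-10')
```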

It’s very helpful to know what queues are used for. Maybe your shop already uses queues and you’re curious how they work. Hopefully this will give you a better idea. The question then becomes: can we put SQL Server transactions into a remote queue for processing?

The answer is yes. Let’s look at a sample implementation.

“Happy Birthday!”

Say we work at a company that processes email birthday greetings. We have an application that takes in a person’s first name, last name, date of birth, and email address. Once we get this data in, we need to alert a processing system to send the user an email saying “Happy birthday!” Unless you want to leverage database mail to send all these alerts, you need a way to get the data to the processing system to send them out as they come in. Here’s how we’ll do it:

  1. Set up a trigger on the table that captures new rows,
  2. The trigger queues a message to a local service broker,
  3. A listener application runs and watches for new messages in service broker, and plucks them out. It writes a message to a remote queue, and then,
  4. A second listener watches this queue for new additions, and when it sees one, sends an email.

Got it? Here’s a diagram showing how it would work:

In this example scenario, any time a new row is added to the dbo.People table, a trigger writes a new message to our Service Broker queue on the SQL Server. Messages would pile up there if left to their own devices, so we have a listener application (usually a service of some kind) watching for new messages in the Service Broker queue. It picks up the messages with a RECEIVE command, takes the message contents, and writes them to an external queue. A second listener application watches this queue, and this service is responsible for sending out our email.

I drew a dashed box around the SQL Server part. The other pieces, like each listener, the message queue, and even the process that sends a message could be on a different host, or maybe even up in the cloud.
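The hand-off in the middle is the interesting part: the first listener takes one XML document off the Service Broker queue and fans it out as one message per person to the external queue. Here's a small Python sketch of that step (the exact XML shape here is assumed for illustration; the real document is built by the trigger later in this post):

```python
import json
import xml.etree.ElementTree as ET

# A sample of the kind of XML document the trigger might put on the
# Service Broker queue (shape assumed for illustration): one document
# can describe several new people.
broker_message = """<?xml version="1.0"?>
<PersonChange>
  <NewPerson><Id>1</Id><FirstName>Ada</FirstName><LastName>Lovelace</LastName><EmailAddress>ada@example.com</EmailAddress><DOB>1815-12-10</DOB></NewPerson>
  <NewPerson><Id>2</Id><FirstName>Alan</FirstName><LastName>Turing</LastName><EmailAddress>alan@example.com</EmailAddress><DOB>1912-06-23</DOB></NewPerson>
</PersonChange>"""

def to_queue_messages(xml_doc):
    """Split one Service Broker document into one JSON message per
    person, ready to publish to the external queue."""
    root = ET.fromstring(xml_doc)
    return [json.dumps({field.tag: field.text for field in person})
            for person in root.findall("NewPerson")]

messages = to_queue_messages(broker_message)
print(len(messages))  # 2 -- one outgoing message per new person
```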

This type of architecture might sound confusing at first. To help you, let’s build a quick proof of concept. I’ll walk you through the steps.


Please note, there’s one thing I need to make super abundantly clear for this demo: you’d never, ever configure these components like this for production. There’s so much more to consider, like setting up RabbitMQ to use SSL, writing actual applications and services instead of PowerShell to handle message listeners, and putting more effort into Service Broker. This should serve fine as a proof of concept, but if you want to actually implement something like this, make sure you do your research and follow best practices for each component.

To set everything up, there are a couple of things you’ll need ahead of time. You can either construct this demo on your local machine or in a VM; that’s up to you. Here’s everything you’ll want before you start:

  1. A running instance of SQL Server that you can create a test database on. Any version and edition that’s 2008 R2 or newer will work.
  2. We’re going to use RabbitMQ as our message queuing system. RabbitMQ is like MSMQ, but good. It’s also free. You’ll want to download and install it with all the defaults. It has a dependency on Erlang version 19 (don’t download version 20; it isn’t compatible with RabbitMQ yet). Install Erlang first (run as administrator) and then install RabbitMQ.
  3. Once RabbitMQ is installed, you can install the management pack to get a nifty web interface. Open a command prompt as administrator, and run the following command:
    cd C:\Program Files\RabbitMQ Server\rabbitmq_server-3.6.10\sbin
    rabbitmq-plugins enable rabbitmq_management

    Once this completes, you should be able to open a web browser and point it to http://localhost:15672/. The default login is guest, with a password of guest. If everything worked, you should see a screen like this:

    Congratulations, RabbitMQ (and the management administration tools) are installed and ready to use! We’ll configure it for our demo in a bit.

  4. We’ll be using PowerShell to serve as our demo listeners. One of the reasons RabbitMQ is so good is that pretty much every language out there can talk to it. There’s a PowerShell module built for RabbitMQ that you should download. You’ll need to save the code to a folder somewhere (I’ll be using C:\QueueDemo), and use PowerShell to unblock the files (you can just run Get-ChildItem -recurse | Unblock-File in the directory you save the module files in as administrator to do this; see here as to why).

Once this is all done, we’ll need to do some configuration. First we’ll make sure SQL Server can capture changes and write them to the local Service Broker queue, and then we’ll set up RabbitMQ to handle messages.

Configuring SQL Server and Service Broker

First things first, we’ll take care of setting up our database and Service Broker. Our goal is to create a Service Broker queue that logs a message every time new rows appear in our dbo.People table. There are going to be two parts to this: the trigger that fires when new rows are added to the table, and our Service Broker components (queue, message types, contracts, etc.). I’m purposely glossing over A LOT of Service Broker here for this demo. Getting deep into Service Broker is not just a blog post unto itself; it could easily be a whole series. Here’s all you really need to know for now: Service Broker is going to log messages locally, to be read by our listener and removed from the queue once they’ve been processed.

Here’s a quick SQL Script to set up everything we need, including creating a new database for the demo and turning on Service Broker.

USE [master]
GO

CREATE DATABASE [QueueExample]
GO

ALTER DATABASE [QueueExample] SET ENABLE_BROKER WITH ROLLBACK IMMEDIATE
GO

USE [QueueExample]
GO

CREATE TABLE [dbo].[People] (
    ID int identity(1,1) PRIMARY KEY,
    FirstName nvarchar(25) not null,
    LastName nvarchar(50) not null,
    EmailAddress nvarchar(255) not null,
    DOB datetime2 not null
)
GO

CREATE MESSAGE TYPE NewPerson VALIDATION = WELL_FORMED_XML
GO

CREATE CONTRACT NewPersonContract (NewPerson SENT BY INITIATOR)
GO

CREATE QUEUE NewPersonQueue
GO

CREATE SERVICE NewPersonService ON QUEUE NewPersonQueue (NewPersonContract)
GO

CREATE TRIGGER dbo.People_Insert ON dbo.People
AFTER INSERT
AS
BEGIN
    IF (EXISTS(SELECT 1 FROM inserted))
    BEGIN
        SELECT ID, FirstName, LastName, EmailAddress, DOB
        INTO #NewPeople
        FROM inserted
        WHERE ID is not null

        WHILE (EXISTS(SELECT 1 FROM #NewPeople))
        BEGIN
            DECLARE @message varchar(max)

            SET @message = '<?xml version="1.0"?><PersonChange>' +
            (
                SELECT TOP 50000
                    '<NewPerson><Id>' + CAST(ID as nvarchar(max)) + '</Id><FirstName>' + CAST(FirstName as varchar(25)) + '</FirstName><LastName>' + CAST(LastName as varchar(50)) + '</LastName><EmailAddress>' + CAST(EmailAddress as varchar(255)) + '</EmailAddress><DOB>' + CAST(DOB as varchar(50)) + '</DOB><Occured>' + convert(varchar(50), getdate(), 126) + '</Occured></NewPerson>' AS 'data()'
                FROM #NewPeople ORDER BY ID
                FOR XML PATH(''), TYPE
            ).value('.', 'VARCHAR(MAX)') + '</PersonChange>'

            DECLARE @handle AS uniqueidentifier
            BEGIN DIALOG CONVERSATION @handle
            FROM SERVICE [NewPersonService]
            TO SERVICE 'NewPersonService'
            ON CONTRACT NewPersonContract
            WITH ENCRYPTION = OFF;

            SEND ON CONVERSATION @handle
            MESSAGE TYPE NewPerson (@message)

            DELETE FROM #NewPeople WHERE ID in (SELECT TOP 50000 ID FROM #NewPeople)
        END
    END
END
GO

One quick note on the trigger: yes, it’s using XML. Our messages in this example will be written as well-formed XML. The trigger can process up to 50,000 new users in a single message: it concatenates them into an XML document, then writes the document to the message queue. The highlighted lines above are where we take the message we crafted and push it into the Service Broker queue.
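If the string concatenation in the T-SQL is hard to follow, here's the same idea as a short Python sketch (the sample row is invented; the element names match the trigger above): each row becomes a &lt;NewPerson&gt; element, and the batch is wrapped in one &lt;PersonChange&gt; document.

```python
from xml.sax.saxutils import escape

# Mimic what the trigger does: concatenate a batch of new rows into one
# well-formed <PersonChange> document. The sample row is invented; the
# element names match the T-SQL above.
people = [
    {"Id": 1, "FirstName": "Ada", "LastName": "Lovelace",
     "EmailAddress": "ada@example.com", "DOB": "1815-12-10",
     "Occured": "2017-07-01T12:00:00"},
]

FIELDS = ("Id", "FirstName", "LastName", "EmailAddress", "DOB", "Occured")

def build_message(batch):
    parts = []
    for person in batch:
        fields = "".join("<{0}>{1}</{0}>".format(tag, escape(str(person[tag])))
                         for tag in FIELDS)
        parts.append("<NewPerson>" + fields + "</NewPerson>")
    return ('<?xml version="1.0"?><PersonChange>'
            + "".join(parts) + "</PersonChange>")

doc = build_message(people)
print("<FirstName>Ada</FirstName>" in doc)  # True
```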

When all is said and done, here’s how your database should look. I expanded all the objects.

Configuring RabbitMQ

Now that we have our database configured and ready to go, we’ll need to set up RabbitMQ to process our messages. If you’re new to queues and messages, this part can be pretty confusing. To make all this work, we need some key things set up:

  1. We need a message queue. The queue is ultimately where messages will end up, and where we’ll process things as they come in. A message queue is like a mailbox.
  2. We need a message exchange. In RabbitMQ, a message exchange is like a post office. When we send messages, we don’t explicitly say which queue they should go to; we say which exchange to send them to.
  3. When we create our message exchange, we need to bind it to a queue. The thing that binds them together is called a routing key. A routing key is like the address of the mailbox (queue). In order for our messages to get to the right place, we need to know which exchange to send them to, and which routing key to pass along with them.
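The post-office analogy maps to code pretty directly. Here's a toy Python model of a direct exchange (this is NOT RabbitMQ's API, just the concept), using the exchange, queue, and routing-key names from this demo:

```python
from collections import defaultdict, deque

# A toy model of a RabbitMQ direct exchange: bindings map a routing key
# to queues, and publishing to the exchange delivers the message to
# every queue bound on that key.
class DirectExchange:
    def __init__(self):
        self.bindings = defaultdict(list)  # routing key -> [queues]

    def bind(self, queue, routing_key):
        self.bindings[routing_key].append(queue)

    def publish(self, routing_key, message):
        for queue in self.bindings[routing_key]:
            queue.append(message)  # messages with unbound keys are dropped

# The exchange, queue, and routing key names from this demo
exchange = DirectExchange()
new_people = deque()                   # the "mailbox"
exchange.bind(new_people, "RouteKey")  # the binding ties them together

exchange.publish("RouteKey", '{"FirstName": "Ada"}')
exchange.publish("SomeOtherKey", "this goes nowhere")
print(len(new_people))  # 1 -- only the matching routing key was delivered
```

Notice the sender never names the queue: it only names the exchange and the routing key, and the binding decides where the message lands.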

There are two ways to set all this up. One, you can use the web interface you logged into above, or two, you can use PowerShell! Creating the required pieces in the UI is pretty self-explanatory, but in case you’re worried, you can just load up this script in the PowerShell ISE.

#Either save this file and run it where you saved the RabbitMQ powershell module,
#Or open it in the ISE, change to the directory where your module is saved, and run it that way.
#Of course, you could put the module in a module path where you could just import it normally.
#I'm not your mother, do what you want

Import-Module ./RabbitMQTools.psd1

#When prompted, for now, use the default admin username guest, default password guest
$Credential = Get-Credential

$ExchangeName = "PeopleExchange"
$QueueName = "NewPeople"
$RoutingKey = "RouteKey"

Add-RabbitMQExchange -ComputerName localhost -Credential $Credential -name $ExchangeName -Type direct -Durable

Add-RabbitMQQueue -ComputerName localhost -Credential $Credential -Name $QueueName -Durable -VirtualHost /

Add-RabbitMQQueueBinding -ComputerName localhost -Credential $Credential -ExchangeName $ExchangeName -Name $QueueName -RoutingKey $RoutingKey -VirtualHost /

Save this file to the same directory you saved the RabbitMQ PowerShell module, then try running it. If everything works, in your web UI you should see an exchange (with a binding and route key) and a new queue. We’re done configuring!

Putting it all together

Great, so now all the plumbing is wired up. Let’s put this thing into practice. To show off how a system like this might work, we’re going to use PowerShell to create listeners on our Service Broker queue and RabbitMQ queue. That means we need two scripts, which are below. If you want to follow along with my demos, name them as I have them written in the comment block at top of each. Again, you’ll want to save these into the same folder where you saved all your other files (for me, it’s C:\QueueDemo).

#Start-QueueListener.ps1
#This script will listen to the RabbitMQ instance, and watch for messages coming in.
#When it detects them, it will read them in order and output them to the screen
param (
    [Parameter(Mandatory=$false)] [string] $rabbitMQComputer = "localhost",
    [Parameter(Mandatory=$true)] [string] $QueueName,
    [Parameter(Mandatory=$true)] [System.Management.Automation.CredentialAttribute()] $Credential
)

#Is the RabbitMQ Module Loaded?
if ((Get-Module RabbitMQTools) -eq $null) {
    throw "RabbitMQ module not loaded. Please load it before continuing."
}

$stopWatch = [System.Diagnostics.Stopwatch]::StartNew()
$timeouts = 0

while ($true) {
    #Grab a message
    $IncomingMessage = Get-RabbitMQMessage -Name $QueueName -ComputerName $rabbitMQComputer -Count 1 -Credential $Credential -Remove -VirtualHost /
    if ($IncomingMessage) {
        $Data = $IncomingMessage.Payload | ConvertFrom-Json
        Write-Host ("New user " + $Data.FirstName + " " + $Data.LastName + " with an email address of " + $Data.EmailAddress + " and a DOB of " + $Data.DOB + " was created on " + $Data.Created)
        $timeouts = 0
        $stopWatch.Restart()
    } else {
        Start-Sleep 5
        $elapsed = [math]::Floor($stopWatch.ElapsedMilliseconds / 1000)
        if ($elapsed % 30 -eq 0 -and $elapsed -gt 0) {
            $timeouts += 30
            Write-Warning "No messages received in the last $timeouts seconds..."
        }
    }
}
#Start-SQLListener.ps1
#This script will watch the service broker queue on the SQL Server
#When new messages arrive, it will parse the XML and add 1 message per new user
param (
    [Parameter(Mandatory=$true)] [string] $sqlserver,
    [Parameter(Mandatory=$true)] [string] $database,
    [Parameter(Mandatory=$true)] [string] $servicebrokerqueue,
    [Parameter(Mandatory=$false)] [string] $rabbitMQComputer = "localhost",
    [Parameter(Mandatory=$true)] [string] $rabbitMQExchange,
    [Parameter(Mandatory=$true)] [string] $rabbitMQRoutingKey,
    [Parameter(Mandatory=$true)] [System.Management.Automation.CredentialAttribute()] $Credential
)

#Is the SQL Server Module Loaded?
if ((Get-Module sqlps) -eq $null -and (Get-Module sqlserver) -eq $null) {
    throw "SQLPS/SQLSERVER module not loaded. Please load it before continuing."
}

#Is the RabbitMQ Module Loaded?
if ((Get-Module RabbitMQTools) -eq $null) {
    throw "RabbitMQ module not loaded. Please load it before continuing."
}

while ($true) {
    #Grab a message
    $servicebrokermessage = Invoke-Sqlcmd -ServerInstance $sqlserver -Database $database -Query "RECEIVE TOP(1) CAST(message_body AS VARCHAR(MAX)) AS message_body FROM dbo.$servicebrokerqueue"
    if ($servicebrokermessage -and $servicebrokermessage.message_body) {
        $xml = [xml] $servicebrokermessage.message_body
        ForEach ($np in $xml.PersonChange.NewPerson) {
            #Parse it
            $NewPerson = [PSCustomObject] @{
                ID = $np.Id
                FirstName = $np.FirstName
                LastName = $np.LastName
                EmailAddress = $np.EmailAddress
                DOB = $np.DOB
                Created = $np.Occured
            }
            Write-Verbose "Received message from service broker"
            $Message = $NewPerson | ConvertTo-Json
            #Pop it into rabbitmq
            Write-Verbose "Sending message to RabbitMQ..."
            Add-RabbitMQMessage -ComputerName $rabbitMQComputer -Credential $Credential -ExchangeName $rabbitMQExchange -RoutingKey $rabbitMQRoutingKey -Payload $Message -VirtualHost /
        }
    }

    #Wait 2 seconds
    Start-Sleep -Seconds 2
}

Now we can have some fun! Once you have these files saved, open two new PowerShell sessions. In the first window, run the Start-SQLListener.ps1 file and in the other, run the Start-QueueListener.ps1 file (don’t forget to import the RabbitMQ module in both sessions, import the SQL Server PowerShell module in the SQL listener window, and make sure you get a credential first; you can keep using the guest/guest username and password you used before). The first script is responsible for watching the Service Broker queue, receiving messages, and writing them to RabbitMQ. The second is responsible for watching RabbitMQ for new messages and then acting on them. In the real world, these things probably wouldn’t all be on the same machine; they might not even be in the same data center.

If everything is up and running, you should see something like this.

The queue listener has a little warning code I wrote that fires if it goes a long period without seeing any messages (which can be helpful if something isn’t configured right).

Now it’s time for the main event: we’re going to actually write a row to our table. You can either fire up management studio and insert a record manually, or you can use Invoke-SQLCmd to insert a new record to dbo.People. Once you do, you should see activity: the SQL Listener script will detect the new message in the Service Broker Queue, and alert you that it’s sending the message on to RabbitMQ. The queue listener will then perk up, saying it received the message, and output the data to the screen.

Here it is in action:

Pretty slick, right? I thought so too. So there you have it: a quick and exciting way to use SQL Server Service Broker to get messages to an external queue, and how you can test it out yourself.

Going further

So there’s a quick and dirty example of how message queues can work for you and your databases. Putting it into practice in the real world can be a lot more complicated: different types of applications using different types of messages, lots of exchanges with multiple queues each, or fanout exchanges that send messages to multiple queues. You can also create routing rules for different messages based on all sorts of criteria. That’s why I love RabbitMQ so much.

This just scratches the surface of what message queues and SQL Server can do together. In a future blog post, I’ll conduct a shoot-out of Service Broker and queues against technologies like SQL Server replication to see which one is faster.

What do you think? Do you see value in queues? Or are you already using them today? Either way, I’d love to hear your feedback.

One thought on “Message queues for the DBA: sending data out into the world”

  1. Sander Stad

    Thank you for the post. I’ve been playing with the idea to implement message brokers in our current system and have done several projects with BizTalk. This will probably help me.
