Reusing Conversations

April 25th, 2007

One of the most commonly deployed patterns of using Service Broker is what I would call 'data push': Service Broker conversations are used to send data one way only (from initiator to target), and the target never sends any message back to the initiator. Common applications for this pattern are:

· ETL from the transactional system to the data warehouse

· audit and logging

· aggregation of data from multiple sites

Implementing this pattern is usually trivial: modify a stored procedure to SEND a message (a datagram), or add a trigger on a table to issue the SEND. But as soon as this is done, the next question arises: do I start a new conversation for each message, or do I try to reuse conversations? My answer is that reusing conversations for this pattern (one-way data push) is highly recommended.
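For the trigger variant, here is a minimal sketch of the naive implementation that starts one dialog per statement (the table and service names [Orders], [AuditSource] and 'AuditTarget' are my own placeholders, not from an actual deployment). It is exactly this per-message dialog cost that the rest of the post sets out to avoid:

```sql
-- Hypothetical audit trigger: pushes every insert on [Orders] to an
-- audit service as a one-way (datagram) message.
-- [Orders], [AuditSource] and 'AuditTarget' are made-up names.
CREATE TRIGGER [trg_Orders_Audit] ON [Orders]
AFTER INSERT
AS
BEGIN
    SET NOCOUNT ON;
    DECLARE @handle UNIQUEIDENTIFIER;
    DECLARE @body VARBINARY(MAX);
    -- Serialize the inserted rows into the message body
    SELECT @body = CAST(
        (SELECT * FROM inserted FOR XML AUTO, TYPE) AS VARBINARY(MAX));
    -- Naive version: a brand new dialog for every statement, using the
    -- DEFAULT contract and DEFAULT message type
    BEGIN DIALOG CONVERSATION @handle
        FROM SERVICE [AuditSource]
        TO SERVICE N'AuditTarget'
        WITH ENCRYPTION = OFF;
    SEND ON CONVERSATION @handle (@body);
END;
```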

Here are some of my arguments for reusing conversations:

· conversations are the only elements guaranteed to preserve order in transmission (even locally!). E.g., if the order of the audited events is important, they have to be sent on the same conversation

· reusing conversations has a significant performance impact:

o the cost of setting up and tearing down a conversation for each message can degrade performance by a factor of ~4

o the advantage of having multiple messages on the same conversation can speed up processing on the RECEIVE side by a factor of ~10

· conversations are the 'unit of work' for the Service Broker background tasks, and as such resources are allocated in the system according to the number of active conversations ('active' = have some activity pending, like sending a message or replying with an acknowledgement). The number of pending messages has less influence on resource allocation: 2 dialogs with 500 messages each will allocate far fewer resources than 1000 dialogs with 1 message each
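To see why reuse pays off on the receiving end, here is a minimal target-side sketch, assuming a queue named [TargetQueue] (my placeholder, not from an actual setup). A single RECEIVE drains every available message of one conversation group, so many messages queued on the same dialog are fetched in one call instead of one call per message:

```sql
-- [TargetQueue] is a hypothetical queue name used for illustration.
DECLARE @messages TABLE (
    [conversation_handle] UNIQUEIDENTIFIER,
    [message_type_name] SYSNAME,
    [message_body] VARBINARY(MAX));

BEGIN TRANSACTION;
-- One RECEIVE picks up all available messages of one conversation
-- group: 500 messages queued on the same dialog arrive in a single
-- call instead of 500 separate ones.
WAITFOR (
    RECEIVE [conversation_handle], [message_type_name], [message_body]
    FROM [TargetQueue]
    INTO @messages), TIMEOUT 1000;
-- ... process the batch in @messages here ...
COMMIT;
```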

The next step is to decide just how many conversations to use. One conversation would completely preserve the order of operations, but because each SEND locks the conversation exclusively, it would serialize every transaction and lead to serious scalability problems. My favorite scheme is to have one conversation per user connection. This preserves the order of actions on one connection, which is most often the desired granularity, and scales up easily because it is very unusual for connections to share transactions (just how often do you use sp_getbindtoken?).

Next is an example stored proc that reuses conversations based on the conversation arguments (from service, to service, contract) and the current connection ID (@@SPID). It relies on a table that stores the conversation associated with each @@SPID.

-- This table associates the current connection (@@SPID)
-- with a conversation. The association key also
-- contains the conversation parameters:
-- from service, to service, contract used
CREATE TABLE [SessionConversations] (
	SPID INT NOT NULL,
	FromService SYSNAME NOT NULL,
	ToService SYSNAME NOT NULL,
	OnContract SYSNAME NOT NULL,
	Handle UNIQUEIDENTIFIER NOT NULL,
	PRIMARY KEY (SPID, FromService, ToService, OnContract),
	UNIQUE (Handle));
GO

-- SEND procedure. Will look up an existing conversation to reuse,
-- or start a new one in case no conversation exists or the
-- conversation cannot be used
CREATE PROCEDURE [usp_Send] (
	@fromService SYSNAME,
	@toService SYSNAME,
	@onContract SYSNAME,
	@messageType SYSNAME,
	@messageBody VARBINARY(MAX))
AS
BEGIN
	SET NOCOUNT ON;
	DECLARE @handle UNIQUEIDENTIFIER;
	DECLARE @error INT;
	DECLARE @counter INT;
	SELECT @counter = 1;
	BEGIN TRANSACTION;
	-- Will need a loop to retry in case the conversation is
	-- in a state that does not allow transmission
	WHILE (1=1)
	BEGIN
		-- Seek an eligible conversation in [SessionConversations]
		SELECT @handle = Handle
			FROM [SessionConversations] WITH (UPDLOCK)
			WHERE SPID = @@SPID
			AND FromService = @fromService
			AND ToService = @toService
			AND OnContract = @onContract;
		IF @handle IS NULL
		BEGIN
			-- Need to start a new conversation for the current @@SPID
			BEGIN DIALOG CONVERSATION @handle
				FROM SERVICE @fromService
				TO SERVICE @toService
				ON CONTRACT @onContract
				WITH ENCRYPTION = OFF;
			INSERT INTO [SessionConversations]
				(SPID, FromService, ToService, OnContract, Handle)
				VALUES
				(@@SPID, @fromService, @toService, @onContract, @handle);
		END;
		-- Attempt to SEND on the associated conversation
		SEND ON CONVERSATION @handle
			MESSAGE TYPE @messageType
			(@messageBody);
		SELECT @error = @@ERROR;
		IF @error = 0
		BEGIN
			-- Successful send, just exit the loop
			BREAK;
		END;
		SELECT @counter = @counter+1;
		IF @counter > 10
		BEGIN
			-- We failed 10 times in a row, something must be broken
			RAISERROR(
				N'Failed to SEND on a conversation for more than 10 times. Error %i.',
				16, 1, @error) WITH LOG;
			BREAK;
		END;
		-- Delete the associated conversation from the table and try again
		DELETE FROM [SessionConversations]
			WHERE Handle = @handle;
		SELECT @handle = NULL;
	END;
	COMMIT;
END;
GO

-- Test the procedure with some dummy parameters.
-- The FromService 'ST' must already exist in the database
EXEC usp_Send N'ST', N'Foobar', N'DEFAULT', N'DEFAULT', 0x01020304;

This shows how to reuse the conversations, but there are still plenty of details one has to think of, like how and when to end these conversations. I'll try to visit these topics in a future post.
