Getting started with MQTT and Sparkplug

Eclipse Sparkplug™ is a specification that increases MQTT interoperability by defining topic and payload contents, and the interaction of devices and monitoring applications. It is being standardized at the Eclipse Foundation. Sparkplug was intended for industrial IoT applications, but there is growing interest outside that area.

There are many resources for Sparkplug, which you can find by searching for “MQTT Sparkplug”; we’ll be using a few of them below.

The intent of this article is to get you up and running with a working Sparkplug implementation quickly, so you can see it in action rather than just read the specification. If you do want to read the specification, go to the Sparkplug GitHub repository, check out the develop branch and build it according to the README (basically gradlew build). If you want a high-level view of Sparkplug first, go to the Conclusion of this article for more links.

We need at least three components in a working Sparkplug setup:

  1. An MQTT server (or broker)
  2. A Sparkplug Host Application (server) which can monitor and control edge nodes and devices
  3. Sparkplug client side implementations which can act as Sparkplug edge nodes and devices

A Sparkplug Device is the network end point that data is obtained from and commands are sent to. In an industrial IoT environment, such a device may be attached to a PLC by one of a variety of industrial protocols, but probably not MQTT. The PLC can act as a Sparkplug Edge Node, which is a concentrator passing messages to and from the devices attached to it. The Edge Node is itself also a device, in that it can have data sent from it and commands sent to it. The Edge Node is connected to the MQTT server using MQTT, naturally.

For the purposes of getting started quickly, I’m going to assume that all the components are installed and running on the same computer.

MQTT Server (Broker)

If you’re familiar with MQTT, then you probably have an MQTT server already. If not, then look at Eclipse Mosquitto or HiveMQ Community Edition. Mosquitto is written in C and intended to be small, while HiveMQ CE is a Java implementation with a comprehensive administration and monitoring API. Both are open source. Follow the Quick Start instructions in the README in the HiveMQ CE GitHub repository to get up and running. The HiveMQ broker is also used in the Sparkplug TCK, which is still under development, so I’m going to use it as the example here.

After following the quick start instructions, you should have a HiveMQ broker running with output something like this:

HiveMQ CE start output

Sparkplug Host Application

At the moment there is much less choice of Sparkplug Host Applications than of MQTT brokers. Once the standardization of Sparkplug is complete I expect many more implementations to surface. For now, I’m going to focus on Inductive Automation’s Ignition platform, which you can download from the Inductive Automation website. My experience is of installing and running on Linux, so you may have to amend the procedure slightly for Windows.

Run the installer you’ve downloaded, choose the install location and the default set of modules; I prefer not to run it as a service or start it immediately. You can then run the Ignition server by switching to the install location and running the Linux command:

sh console
(I assume on Windows it will be just "ignition console")

Now you can point a web browser at localhost:8088. It asks you which version to install – I chose the Maker Edition for personal projects. You’ll need to create an Inductive Automation account to get a license. Once you’ve done that you can create a user account to enable you to access the Ignition server. Then I leave the ports at their default configuration. After this you should be able to start the “gateway”.

At this point, you should get the option to “Enable Quick Start”, which, probably as a new user, seems a good idea. So I did that. You’ll need the login credentials you just created.

The next job is to install the MQTT Transmission module for Ignition. There is a video about installing and configuring Transmission. The Transmission module allows Sparkplug messages to be sent to the MQTT broker containing data updates for devices and end points, in this case both simulated.

Once that is done, the Transmission module should connect to the HiveMQ broker (or whichever MQTT broker you are using), as there is a default connection to tcp://localhost:1883, and data messages from the example configuration should be being sent to it.

The Quick Start configuration will have created an example Sparkplug Edge Node, but what we don’t have is a Sparkplug Host Application. For that we need the Ignition MQTT Engine module. There is another video about configuring MQTT Engine. The most important thing to make sure of is that the primary host id is set to the same value in both Transmission and Engine configurations.

The Quick Start configuration includes a device simulator which is publishing simulated data from an OPC UA device. At this point, these data update messages are not reaching the MQTT broker – we need further configuration for that. The data is being sent to a sample Ignition application though. To see that, from Home in the Ignition web console, select “View Projects” then “Launch Project” under “Sample Quick Start Project”. You will be taken to the Quick Start home page. Switch to the “Ignition 101” page to see some animated graphs of the simulated data, which you can explore.

Ignition Designer

To do that configuration, we need another Ignition component – the Designer. On the Ignition web console home page there is a button to download it (in the Build It section).

Download the correct package for your OS, and then install or extract it. On Linux this is just a matter of extracting the tar file contents to a suitable location.

Now launch the Designer. On Linux I find it easiest to switch to the app directory and run the program. This opens the Designer Launcher window, and your installed server should be showing as a box within it. Select that box, and then press the “Open Designer” button.

Use the credentials you created to log in. Then press OPEN on the “Sample Quick Start Project” to take you to the designer. The next steps I learnt from the documentation for Sending OPC Tag Data with Transmission.

  1. Open the OPC Browser (View->Panels->OPC Browser)
  2. Expand Devices to see [Sample_Device]
  3. In the Tag Browser (bottom left hand panel in the Designer) choose “default” in the top list box.
  4. Drag the [Sample_Device] from the OPC Browser to the Tag Browser left hand column (making sure “default” is still selected).

You should now see the Sample_Device tag folder in the Tag Browser window. If you expand it, it should look like this:

You can delete some of the sub-folders if you like, such as Controls. Leave at least the Ramp folder, though.

Now, to get the data values into the MQTT broker, we need to go to the Transmission module configuration in the web console.

  • Go to MQTT TRANSMISSION Settings -> Transmitters -> Create new Settings
  • Create a name for the Transmitter
  • The “Tag Provider” is “default”
  • Everything else can be left unchanged. Press “Create New Settings”
  • Go back to the “General” pane, and “Save Changes”.

The Sparkplug data messages should now be being sent to the MQTT broker. To see the raw data, you can use any MQTT subscriber program or app. To see the details of the messages, we are going to use another Eclipse project, Tahu.

Eclipse Tahu

Get the Tahu project by cloning the git repository:

git clone

Switch to the develop branch and build:

git checkout develop
mvn clean install

Now switch to the directory where the Sparkplug listener has been built:

cd tools/java_sparkplug_b_listener/target

and run the listener:

java -jar sparkplug_b_listener-0.5.13-SNAPSHOT.jar

Check the version number of the built jar: it may be different for you, and certainly will be if you’re reading this after some time has passed!

Now you should see the content of the Sparkplug messages being received. Here is an example:

Message Arrived on Sparkplug topic spBv1.0/Sample Device/NDATA/Ramp
{
  "timestamp" : 1642849106212,
  "metrics" : [ {
    "name" : "Ramp6",
    "timestamp" : 1642849104666,
    "dataType" : "Double",
    "value" : 349.87744
  },

... similar entries deleted here for conciseness

  {
    "name" : "Ramp4",
    "timestamp" : 1642849105667,
    "dataType" : "Double",
    "value" : 364.78933333333333
  } ],
  "seq" : 123
}

You’ll see that the message is being received on topic:

spBv1.0/Sample Device/NDATA/Ramp

The topic levels mean:

  • spBv1.0 – a prefix on Sparkplug messages to indicate the version
  • Sample Device – a group identifier
  • NDATA – message type, in this case data from an edge node (as opposed to a device attached to an edge node which would have DDATA as its message type)
  • Ramp – the edge node identifier, which has to be unique within the Sparkplug group

Each NDATA message contains a timestamp, sequence number and an array of metric data values being reported.
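To make the topic anatomy concrete, here is a minimal Python sketch of splitting a Sparkplug B topic into those levels. The function name and dictionary keys are my own choices for illustration, not part of Tahu or the specification:

```python
def parse_sparkplug_topic(topic):
    """Return the components of a Sparkplug B topic as a dict."""
    levels = topic.split("/")
    if len(levels) not in (4, 5) or levels[0] != "spBv1.0":
        raise ValueError("not a Sparkplug B topic: %r" % topic)
    parsed = {
        "namespace": levels[0],     # version prefix
        "group_id": levels[1],      # group identifier
        "message_type": levels[2],  # e.g. NBIRTH, NDATA, DDATA, NCMD
        "edge_node_id": levels[3],  # unique within the group
    }
    if len(levels) == 5:            # device-level messages add a fifth level
        parsed["device_id"] = levels[4]
    return parsed

print(parse_sparkplug_topic("spBv1.0/Sample Device/NDATA/Ramp"))
```

Device-level messages (DDATA, DCMD and so on) carry a fifth level for the device id, which we will see later when publishing data from a device.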

Publishing Data from a Device

For this last step in a quick run through of setting up a Sparkplug environment, we are going to use a Tahu utility to publish some data from a device.

We need to edit a couple of parameters in a source file first and then rebuild the project. In the file


Change the line:

private String clientId = null;

so that the clientId has a value. The literal string “anything” would be fine, unless you need to differentiate it from a lot of other MQTT clients:

private String clientId = "anything";

Then change:

private long PUBLISH_PERIOD = 60000; // Publish period in milliseconds

to:

private long PUBLISH_PERIOD = 2000; // Publish period in milliseconds

so we see the results sooner. Rebuild the project with “mvn clean install”, then switch to the sparkplug_b/stand_alone_examples/java/target directory and run the example:

java -jar sparkplug_b_example-0.5.13-SNAPSHOT.jar

Now we should be publishing some data every two seconds on the topic:

spBv1.0/Sparkplug B Devices/DDATA/Java Sparkplug B Example/SparkplugBExample

Where the last two levels are the edge node id and device id respectively. Switching back to the Ignition Designer, we can see the results of those messages arriving. In the Tag Browser, switch to the MQTT Engine tag provider, then expand “Edge Nodes”, “Sparkplug B Devices”, “Java Sparkplug B Example” and “SparkplugBExample”. You should see values from the incoming messages being updated, looking something like this:

Now you would be in a position to take the data from these messages and use it, via the Designer, in a dashboard like the “Explore Ignition” one in the quick start project.


Conclusion

This is just a very quick guide to getting a Sparkplug setup going. The Ignition platform has a lot of capability to interface to a wide variety of edge nodes, devices, databases and other systems, as well as flexible UIs. As the Sparkplug specification is formalized and becomes a standard, we expect that other platforms with other focusses will become available too.

There are many other guides and resources to help you continue understanding and using Sparkplug.

Can MQTT-SN out-perform MQTT?

I don’t know of any rigorous comparisons, mainly because up to now MQTT-SN has found only limited use.

I think MQTT-SN could perform better than MQTT under certain circumstances, but I wouldn’t say so as a blanket statement. First of all, there are the different characteristics of UDP and TCP: TCP has reliability and segmentation built in, so the quality of your connections and your payload sizes will be a factor. For instance, if you have an unreliable (satellite) link, you may need to retry UDP messages yourself, which could be worse than letting TCP do it for you.

There is at least one scenario where I think MQTT-SN should perform better than MQTT, and I think it’s a good way of framing the comparison. At IBM we used to discuss, from time to time, how to get MQSeries used on financial trading floors. There, TIBCO, for one, reigned supreme, and we could not make headway because of performance. The reason the competition performed better, in terms of message latency, was that they used UDP multicast. Where MQ used TCP client-server connections for pub-sub (not MQTT, but an identical topology), TIBCO publishers would send messages to a multicast group. Filtering on topics of interest was done at the client end – all messages would be received by the client library, but only those subscribed to would be passed on to the application. I believe the content was not encrypted (for speed), because the system was limited to the self-contained and isolated trading floor. As soon as you add more connectivity you have to think about security, authentication and encryption, which slows everything down from the optimum.

A similar solution can be implemented using MQTT-SN QoS -1, at least over UDP, and I think it could definitely be faster. But multicast is limited to a LAN or subnet; it is not available on WANs. QoS -1 multicast is inherently unreliable, although that’s probably just fine on a network that’s not overloaded. Whether a connection-oriented MQTT-SN solution over UDP using QoS 0, 1 or 2 would be faster than a similar MQTT one, I’m not sure – the differences could be marginal.
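To illustrate what QoS -1 “fire and forget” involves: the packet can be built and sent as a plain datagram without any CONNECT having taken place. The sketch below constructs such an MQTT-SN PUBLISH packet in Python. The field layout and flag values follow my reading of the MQTT-SN 1.2 specification (QoS -1 is encoded as QoS flag bits 0b11 and requires a pre-defined topic id), and the multicast address in the comment is just an example, not anything standard:

```python
import struct

PUBLISH = 0x0C             # MQTT-SN PUBLISH message type
QOS_MINUS_1 = 0b0110_0000  # QoS flag bits (6,5) set to 0b11 means QoS -1
TOPIC_PREDEFINED = 0b01    # QoS -1 needs a pre-defined topic id

def build_qos_minus_1_publish(topic_id, payload):
    """Build an MQTT-SN PUBLISH packet that can be sent with no session."""
    flags = QOS_MINUS_1 | TOPIC_PREDEFINED
    # Fields: Flags, TopicId, MsgId (unused at QoS -1), then the payload
    body = struct.pack("!BHH", flags, topic_id, 0) + payload
    # 1-byte length format covers packets up to 255 bytes
    return bytes([len(body) + 2, PUBLISH]) + body

pkt = build_qos_minus_1_publish(42, b"21.5")
# The datagram could then be sent with a plain UDP socket, e.g. to a
# multicast group (example address only) -- no connection needed:
#   sock.sendto(pkt, ("225.0.18.83", 1883))
print(pkt.hex())
```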

In many cases, I think the fastest solution could be a fat MQTT pipe from the cloud to the MQTT-SN gateway, then MQTT-SN multicast on the LAN. If you want high security then you might need a connection-oriented MQTT-SN solution. Going completely MQTT-SN instead of MQTT might be faster, but I wouldn’t bet on it. And I expect many solutions will need the extra features of MQTT; they wouldn’t be able to live with the limitations of MQTT-SN.

Who is using Paho?

I stopped working for IBM in October of last year (2019) after several decades. The Eclipse™ Paho™ open source project was started in 2011 by IBM under the auspices of the Eclipse Foundation. I’ve been involved in it as a contributor since the beginning. The goal was to help create a community around MQTT – I think that has been achieved.

Working on open source has been a fulfilling activity. It has allowed me to have largely unfettered control over my own work, to concentrate on doing instead of talking about doing, and to get direct feedback from users. On top of that, there is the feel-good factor of being part of the open source movement and the mission described by the OSI:

Open source enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is higher quality, better reliability, greater flexibility, lower cost, and an end to predatory vendor lock-in.

Since leaving IBM I have been motivated by that sense of fulfilment and responsibility to continue the maintenance and development of the Paho projects that I created. For a significant amount of time in recent years I was the only person from IBM working on Paho. Now, I’d like to be able to help out with all the Paho projects, but unfortunately I don’t have enough time for that. I’ve started by addressing the backlog of the Paho C client. That’s going pretty well, and I hope that with two further releases in the near future I’ll have the issues down to a manageable level.

Fortunately since the beginning of the year, Ranjan Dasgupta from IBM has been working on the Paho Java client, so that’s one less item for me to worry about. I do plan to take a look at the Android client, and also start looking at the embedded C and MQTT-SN embedded clients, but probably not all at once.

Now we come to the main point of this post. I’ve heard mentions in recent conversations of some of the Paho client libraries being used in large projects, or by significant numbers of clients of medium to large enterprises. In one respect I sort of knew that to be the case, but it still took me somewhat by surprise – maybe because I am no longer employed to work on Paho. So I’m interested in the expectations that such users have for support.

While I was working at IBM we used a lot of open source software. IBM made, and still makes, large contributions to open source projects, both in funding and personnel. But small open source projects can find themselves left out in an arena where so many larger, starrier projects compete for attention. Sometimes I made small financial contributions to projects that I found myself using routinely, or that were crucial, especially if they were produced mainly by volunteers.

A few years back, we asked for “success stories” from people and organizations using Paho components. We received a couple of replies, but I know for sure that there are many more successful deployments. If you are using a Paho software component, especially in production, then I’d like to hear from you. You can comment on this post, send me an email, or contact me on Twitter.

I’d like to be able to tell the world about any Paho successes. If you do rely on any of my work in Paho, then do consider sponsoring me.

MQTT-SN Alignment with MQTT

We are starting to work on the standardization of MQTT-SN – MQTT for Sensor Networks. The current specification for MQTT-SN is in a similar position to that of MQTT before it became a standard at OASIS: it is published by IBM, freely available with a liberal license, and has been in use for several years. It is not as widely used as MQTT was at the same point, but there are several existing users and implementations. To this end, I propose that in the process of standardization we take the approach that was adopted for MQTT 3.1.1 – minimal changes to the existing specification, to allow standardization to proceed as quickly as these things ever do. In the case of MQTT 3.1.1, that took about two years.

While there is general agreement on getting things moving quickly, a concern has been raised from a couple of quarters: the current MQTT-SN specification was written before MQTT 5.0 existed. One of the primary goals of MQTT-SN is to extend MQTT to non-TCP networks – to do so, it must allow the easy interoperation of the two protocols. Messages from MQTT must be able to flow to MQTT-SN and vice versa. The concern is that the current MQTT-SN specification aligns more closely with MQTT 3.1.1 than with 5.0, and that we should really be aiming at 5.0, as it is likely to be the more frequently used version in the future.

In fact, several features in MQTT 5.0 were influenced by MQTT-SN, so the flow of concepts might be towards MQTT rather than from it. In this article, I’ll go through the aspects of MQTT-SN and see how they match up to the two MQTT standards, 3.1.1 and 5.0.

MQTT-SN Server

MQTT-SN is both a client/server and a peer-to-peer protocol. An MQTT-SN server can be a broker in the MQTT sense, or a gateway which does little more than mediate between MQTT-SN, MQTT and the underlying transports. Here I will use the term server to refer to both brokers and gateways for MQTT-SN.

Packet Format

Every MQTT packet has a header byte followed by a variable-length remaining length field. Some packets have multiple variable-length fields (string or binary) as part of their construction. Although MQTT packet sizes are kept as small as is feasible, MQTT-SN is intended to be suitable for even lower-power devices, used over networks with weaker reliability guarantees than TCP. Each MQTT-SN packet, apart from one possible exception, has a single variable-length field within it, so only one length field is needed per packet, helping to reduce packet size. As a result, for instance, the MQTT connect packet has been split into several MQTT-SN packets:

  • WILLTOPICREQ – sent by the server to request that a client sends the will topic name
  • WILLTOPIC – sent by the client to tell the server its will topic name
  • WILLMSGREQ – sent by the server to request that a client sends the will message
  • WILLMSG – sent by the client to tell the server its will message
  • WILLTOPICUPD – sent by the client to update its will topic name stored in the server
  • WILLTOPICRESP – sent by the server to confirm the will topic name has been updated
  • WILLMSGUPD – sent by the client to update its will message stored in the server
  • WILLMSGRESP – sent by the server to confirm the will message has been updated
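As an aside on the variable-length remaining length field mentioned above: MQTT encodes it as a variable byte integer, carrying 7 bits of value per byte, with the top bit of each byte flagging that more bytes follow. The algorithm here follows the MQTT specification; the function names are mine:

```python
def encode_vbi(n):
    """Encode an int (0..268435455) as an MQTT variable byte integer."""
    out = bytearray()
    while True:
        byte = n % 128
        n //= 128
        if n > 0:
            byte |= 0x80  # continuation bit: more bytes follow
        out.append(byte)
        if n == 0:
            return bytes(out)

def decode_vbi(data):
    """Decode a variable byte integer; return (value, bytes consumed)."""
    value, multiplier = 0, 1
    for i, byte in enumerate(data):
        value += (byte & 0x7F) * multiplier
        if not byte & 0x80:
            return value, i + 1
        multiplier *= 128
    raise ValueError("truncated variable byte integer")

print(encode_vbi(321))          # b'\xc1\x02'
print(decode_vbi(b"\xc1\x02"))  # (321, 2)
```

This is the field that MQTT-SN trades away in favour of a single length byte (or a 3-byte form for larger packets), keeping parsing simple on constrained devices.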

Connect and Disconnect

The fields in the connect packet are:

  • Will flag – request will topic and message prompting
  • CleanSession – as in MQTT 3.1.1
  • ProtocolId – corresponds to protocol name and version as in both MQTT 3.1.1 and 5.0
  • Duration – keep alive timeout as in both MQTT 3.1.1 and 5.0
  • ClientId – as in both MQTT 3.1.1 and 5.0

The clean session flag operates in a similar manner to MQTT 3.1.1, in that the cleanup happens at both the start and end of the session. In MQTT 5.0, the clean session flag becomes the clean start flag, and a separate property, session expiry, dictates when the session state is removed from the server. The MQTT 5.0 facilities are much more flexible, and I would advocate changing MQTT-SN to match. One way to achieve this would be to add a session expiry 2-byte integer field (matching the duration field) to the CONNECT packet.
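To make that suggestion concrete, here is a hypothetical sketch of an MQTT-SN CONNECT packet with the proposed 2-byte session expiry field appended. The existing field layout (Length, MsgType, Flags, ProtocolId, Duration, ClientId) follows my reading of the MQTT-SN 1.2 specification, but the session expiry field, and all the names here, are the proposal being illustrated, not the current specification:

```python
import struct

CONNECT = 0x04            # MQTT-SN CONNECT message type
FLAG_CLEAN_START = 0x04   # today's CleanSession bit, renamed per MQTT 5.0

def build_connect(client_id, keep_alive, session_expiry, clean_start=True):
    """Sketch of a CONNECT with a proposed extra session expiry field."""
    flags = FLAG_CLEAN_START if clean_start else 0
    # Flags, ProtocolId (0x01), Duration (keep alive in seconds)
    body = struct.pack("!BBH", flags, 0x01, keep_alive)
    body += struct.pack("!H", session_expiry)  # the proposed new field
    body += client_id.encode("utf-8")
    # 1-byte length format covers packets up to 255 bytes
    return bytes([len(body) + 2, CONNECT]) + body

pkt = build_connect("sensor-17", keep_alive=60, session_expiry=3600)
print(pkt.hex())
```

A 2-byte seconds field would cap session expiry at about 18 hours; whether that range is enough is part of what standardization would need to decide.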

The clientid variable-length field is allowed to be zero-length in both MQTT 3.1.1 and 5.0, indicating that the server should assign a clientid itself. If we do allow this behaviour in MQTT-SN, I think it’s important that the assigned clientid is returned to the client, as it is in MQTT 5.0. This could be done by including the server-assigned clientid in the MQTT-SN CONNACK packet, which currently has no variable-length field.

The MQTT-SN DISCONNECT packet has a duration field, which operates in a similar way to the MQTT 5.0 session expiry property; in this case MQTT-SN is already closer to MQTT 5.0. MQTT-SN also allows the DISCONNECT packet to be sent by the server to the client, so that the client has more information about the reason for the disconnection. This is forced on MQTT-SN anyway: unlike MQTT, there may be no underlying connection (TCP in MQTT’s case) to break – over UDP, for instance. Again, this is closer to MQTT 5.0 than 3.1.1; the latter does not allow the server to send a DISCONNECT packet.

In fact the MQTT-SN behaviour on disconnect is more sophisticated than MQTT 5.0 (see Sleeping Clients in the MQTT-SN specification), but that doesn’t alter the fact that it is closer to 5.0 than 3.1.1.

Will Processing

The will message processing in MQTT-SN uses the 8 packets listed above, so it is equally removed from both MQTT 3.1.1 and 5.0. Section 6.3 of the MQTT-SN specification lists the combinations of interactions between the clean session and will flags on the connect packet – I think these would remain intact if we changed the clean session flag to clean start to match MQTT 5.0. So I feel there is no need to change the will processing in MQTT-SN to align it more with MQTT 5.0. There may be other reasons to change it, but this isn’t one.

Topic Names

In both MQTT 5.0 and MQTT-SN, topic ids can be used instead of a full topic string. However, in MQTT-SN this is almost compulsory, because the PUBLISH packet’s one variable-length field is the payload. The topic data is limited to a two-byte field which holds either a topic id (a 2-byte integer) or a short topic string (2 characters). In MQTT 5.0, the topic id (there called a topic alias) is registered by including it in the publish packet along with the long topic string. In MQTT-SN this registration is delegated to a separate packet, REGISTER, which must be sent before sending a PUBLISH packet. This applies to both clients and servers.
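A sketch of how that two-byte topic field can be built, for either a registered topic id or a short (two-character) topic name. The topic id type values are from my reading of the MQTT-SN 1.2 specification; the function itself is illustrative only:

```python
# Topic id type values carried in the flags octet (per MQTT-SN 1.2):
# 0b00 = normal registered id, 0b01 = pre-defined id, 0b10 = short name
TOPIC_ID_NORMAL, TOPIC_ID_PREDEFINED, TOPIC_SHORT_NAME = 0b00, 0b01, 0b10

def encode_topic_field(topic):
    """Return (topic_id_type, 2-byte field) for an int id or 2-char name."""
    if isinstance(topic, int):
        if not 0 <= topic <= 0xFFFF:
            raise ValueError("topic id must fit in 16 bits")
        return TOPIC_ID_NORMAL, topic.to_bytes(2, "big")
    if isinstance(topic, str) and len(topic) == 2:
        # A 2-character topic name packs directly into the same 2 bytes
        return TOPIC_SHORT_NAME, topic.encode("ascii")
    raise ValueError("need a 16-bit topic id or a 2-character short name")

print(encode_topic_field(1042))   # (0, b'\x04\x12')
print(encode_topic_field("ab"))   # (2, b'ab')
```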

This does lead to a problem when using the PUBLISH packet in QoS -1 mode, which is exclusive to MQTT-SN. QoS -1 in MQTT-SN means that a client can send a message to a server outside of the familiar CONNECT/DISCONNECT session start and end range. Typically this could be used in a multicast environment where the client is not sure of the location of the server. There are a number of proposals to allow a variable-length topic name to be included in a PUBLISH packet. At least one has already been implemented: use the spare topic id type indicator to specify that the topic name is a variable-length field, with the topic id integer holding its length. This would make the PUBLISH packet the only one with two variable-length fields (topic name and payload). I would advocate allowing this format for all PUBLISH QoS levels.

MQTT-SN also has the concept of pre-registered topic ids – there is no parallel in either version of MQTT.

I feel there is no need to change to particularly align with MQTT 5.0 – any problems with the existing MQTT-SN implementation of topic ids should be fixed for their own sake.


Subscribe and Unsubscribe

The MQTT-SN SUBACK packet includes a return code; as in MQTT 3.1.1, the UNSUBACK packet does not. It would seem sensible to add a return code to every ack – I think the UNSUBACK is the only one without. This would mirror MQTT 5.0 too.

Other MQTT-SN Features

Other capabilities of MQTT-SN are straightforward mirrors of simple MQTT function, favouring neither one nor the other, or have no analog in MQTT at all. These include:

  • Keep alive
  • Gateway advertisement and discovery
  • Forwarder encapsulation


Conclusion

It turns out there are more changes that I would like to see than I was expecting before writing this article. However, I feel they are more to do with fixing up some of the more irritating MQTT-SN aspects for their own sake, rather than aligning with MQTT 5.0 per se.