Getting started with MQTT and Sparkplug

Eclipse Sparkplug™ is a specification for increasing MQTT interoperability by defining topic structure and payload contents, and the interaction of devices and monitoring applications. It is being standardized at the Eclipse Foundation. Sparkplug was intended for industrial IoT applications, but there is growing interest outside that application area.

There are many resources for Sparkplug, which you can find by searching for “MQTT Sparkplug”; I’ll point out the ones we’ll be using as we go.

The intent of this article is to get you up and running with a working Sparkplug implementation quickly so you can see it in action, not just read the specification. If you do want to read the specification, go to the Sparkplug GitHub repository, check out the develop branch and build it according to the README (basically gradlew build). If you want to get a high-level view of Sparkplug first, go to the Conclusion of this article for more links.

We need at least three components in a working Sparkplug setup:

  1. An MQTT server (or broker)
  2. A Sparkplug Host Application (server) which can monitor and control edge nodes and devices
  3. Sparkplug client side implementations which can act as Sparkplug edge nodes and devices

A Sparkplug Device is the network end point that data is obtained from and commands are sent to. In an industrial IoT environment, such a device may be attached to a PLC by one of a variety of industrial protocols, but probably not MQTT. The PLC can act as a Sparkplug Edge Node, which is a concentrator passing messages to and from the devices attached to it. The Edge Node is also a device in the sense that it can have data sent from it and commands sent to it. The Edge Node is connected to the MQTT server using MQTT, naturally.

For the purposes of getting started quickly, I’m going to assume that all the components are installed and running on the same computer.

MQTT Server (Broker)

If you’re familiar with MQTT, then you probably have an MQTT server already. If not, then look to Eclipse Mosquitto or HiveMQ Community Edition. Mosquitto is written in C and intended to be small, while HiveMQ CE is a Java implementation with a comprehensive administration and monitoring API. Both are open source. Follow the Quick Start instructions in the README in the HiveMQ CE GitHub repository to get up and running with it. The HiveMQ broker is also used in the Sparkplug TCK, which is still under development, so I’m going to use that as an example here.

After following the quick start instructions, you should have a HiveMQ broker running with output something like this:

HiveMQ CE start output

Sparkplug Host Application

At the moment there is much less choice of Sparkplug Host Applications than of MQTT brokers. Once the standardization of Sparkplug is complete I expect many more implementations to surface. For now, I’m going to focus on Inductive Automation’s Ignition platform, which you can download here. My experience is of installing and running on Linux, so you may have to amend the procedure slightly to run on Windows.

Run the installer you’ve downloaded, choose the install location, and use the default set of modules; I prefer not to run it as a service or start it immediately. You can then run the Ignition server by switching to the install location and running the Linux command:

sh ignition.sh console
(I assume on Windows it will be just "ignition console")

Now you can point a web browser at localhost:8088. It asks you which version to install – I chose the Maker Edition for personal projects. You’ll need to create an Inductive Automation account to get a license. Once you’ve done that you can create a user account to enable you to access the Ignition server. Then I leave the ports at their default configuration. After this you should be able to start the “gateway”.

At this point, you should get the option to “Enable Quick Start”, which, probably as a new user, seems a good idea. So I did that. You’ll need the login credentials you just created.

The next job is to install the MQTT Transmission module for Ignition. There is a video about installing and configuring Transmission. The Transmission module allows Sparkplug messages to be sent to the MQTT broker containing data updates for devices and end points, in this case both simulated.

Once that is done, the Transmission module should connect to the HiveMQ broker (or whatever MQTT broker you are using), as there is a default connection to tcp://localhost:1883, and data messages from the example configuration should start flowing to that broker.

The Quick Start configuration will have created an example Sparkplug Edge Node, but what we don’t have is a Sparkplug Host Application. For that we need the Ignition MQTT Engine module. There is another video about configuring MQTT Engine. The most important thing to make sure of is that the primary host id is set to the same value in both Transmission and Engine configurations.

The Quick Start configuration includes a device simulator which is publishing simulated data from an OPC UA device. At this point, these data update messages are not reaching the MQTT broker – we need further configuration for that. The data is being sent to a sample Ignition application though. To see that, from Home in the Ignition web console, select “View Projects” then “Launch Project” under “Sample Quick Start Project”. You will be taken to the Quick Start home page. Switch to the “Ignition 101” page to see some animated graphs of the simulated data, which you can explore.

Ignition Designer

To get those data updates flowing to the MQTT broker, we need another Ignition component – the Designer. On the Ignition web console home page there is a button to download it (in the Build It section).

Download the correct package for your OS, and then install or extract it. On Linux this is just a matter of extracting the tar file contents to a suitable location.

Now launch the Designer. On Linux I find it easiest to switch to the app directory and run the designerlauncher.sh program. This opens the Designer Launcher window, and your installed server should be showing as a box within it. Select that box, and then press the “Open Designer” button.

Use the credentials you created to log in. Then press OPEN on the “Sample Quick Start Project” to take you to the designer. The next steps I learnt from the documentation for Sending OPC Tag Data with Transmission.

  1. Open the OPC Browser (View->Panels->OPC Browser)
  2. Expand Devices to see [Sample_Device]
  3. In the Tag Browser (bottom left hand panel in the Designer) choose “default” in the top list box.
  4. Drag the [Sample_Device] from the OPC Browser to the Tag Browser left hand column (making sure “default” is still selected).

You should now see the Sample_Device tag folder in the Tag Browser window. If you expand it, it should look like this:

You can delete some of the sub-folders if you like, such as Controls. Leave the Ramp folder at least though.

Now, to get the data values into the MQTT broker, we need to go to the Transmission module configuration in the web console.

  • Go to MQTT TRANSMISSION Settings -> Transmitters -> Create new Settings
  • Create a name for the Transmitter
  • The “Tag Provider” is “default”
  • Everything else can be left unchanged. Press “Create New Settings”
  • Go back to the “General” pane, and “Save Changes”.

The Sparkplug data messages should now be flowing to the MQTT broker. To see the raw data, you can use any MQTT subscriber program or app. To see the details of the messages, we are going to use another Eclipse project, Tahu.
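If you just want a quick look at the raw traffic before setting up the Tahu listener, a few lines of Python with the Paho MQTT client will do. This is only a sketch: it assumes the broker is on localhost port 1883 with no authentication, as in the setup described above, and note that the payloads are Protobuf-encoded so they won’t be human readable.

# sketch: watch raw Sparkplug traffic (pip install paho-mqtt, 1.x style API)
import paho.mqtt.client as mqtt

def on_message(client, userdata, msg):
    # Sparkplug payloads are Protobuf-encoded, so just show the topic and size
    print(msg.topic, len(msg.payload), "bytes")

client = mqtt.Client()
client.on_message = on_message
client.connect("localhost", 1883)
client.subscribe("spBv1.0/#")   # all Sparkplug B messages
client.loop_forever()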

Eclipse Tahu

Get the Tahu project by cloning the git repository:

git clone https://github.com/eclipse/tahu.git

Switch to the develop branch and build:

git checkout develop
mvn clean install

Now switch to the directory where the Sparkplug listener has been built:

cd tools/java_sparkplug_b_listener/target

and run the listener:

java -jar sparkplug_b_listener-0.5.13-SNAPSHOT.jar

Check the version number of the built jar, it may be different for you. It certainly will if you’re reading this after some time has passed!

Now you should see the content of the Sparkplug messages being received. Here is an example:

Message Arrived on Sparkplug topic spBv1.0/Sample Device/NDATA/Ramp
{
  "timestamp" : 1642849106212,
  "metrics" : [ {
    "name" : "Ramp6",
    "timestamp" : 1642849104666,
    "dataType" : "Double",
    "value" : 349.87744
  }, 

... similar entries deleted here for conciseness

 {
    "name" : "Ramp4",
    "timestamp" : 1642849105667,
    "dataType" : "Double",
    "value" : 364.78933333333333
  } ],
  "seq" : 123
}

You’ll see that the message is being received on topic:

spBv1.0/Sample Device/NDATA/Ramp

The levels of which mean:

  • spBv1.0 – a prefix on Sparkplug messages to indicate the version
  • Sample Device – a group identifier
  • NDATA – message type, in this case data from an edge node (as opposed to a device attached to an edge node which would have DDATA as its message type)
  • Ramp – the edge node identifier, which has to be unique within the Sparkplug group

Each NDATA message contains a timestamp, sequence number and an array of metric data values being reported.
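To make the topic structure concrete, here is a tiny sketch of splitting a Sparkplug topic into those levels. It is illustrative only; a real implementation should follow the topic rules in the specification.

# sketch: split a Sparkplug topic into its levels
def parse_sparkplug_topic(topic):
    parts = topic.split("/")
    namespace, group_id, message_type, edge_node_id = parts[:4]
    device_id = parts[4] if len(parts) > 4 else None   # present for DDATA/DBIRTH etc.
    return namespace, group_id, message_type, edge_node_id, device_id

print(parse_sparkplug_topic("spBv1.0/Sample Device/NDATA/Ramp"))
# ('spBv1.0', 'Sample Device', 'NDATA', 'Ramp', None)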

Publishing Data from a Device

For this last step in a quick run through of setting up a Sparkplug environment, we are going to use a Tahu utility to publish some data from a device.

We need to edit a couple of parameters in a source file first and then rebuild the project. In the file

sparkplug_b/stand_alone_examples/java/src/main/java/org/eclipse/tahu/SparkplugExample.java

Change the line:

private String clientId = null;

so that the clientId has a value. Indeed “anything” would be fine unless you need to differentiate from a lot of other MQTT clients:

private String clientId = "anything";

Then change:

private long PUBLISH_PERIOD = 60000; // Publish period in milliseconds

to

private long PUBLISH_PERIOD = 2000; // Publish period in milliseconds

so we see the results sooner. Rebuild the project with “mvn clean install”, then switch to the sparkplug_b/stand_alone_examples/java/target directory and run the example:

java -jar sparkplug_b_example-0.5.13-SNAPSHOT.jar

Now we should be publishing some data every two seconds on the topic:

spBv1.0/Sparkplug B Devices/DDATA/Java Sparkplug B Example/SparkplugBExample

Where the last two levels are the edge node id and device id respectively. Switching back to the Ignition Designer, we can see the results of those messages arriving. In the Tag Browser, switch to the MQTT Engine then expand “Edge Nodes”, “Sparkplug B Devices”, “Java Sparkplug B Example” and “SparkplugBExample”. You should see values from the incoming messages being updated, looking something like this:

Now you would be in a position to take the data from these messages and use it in a dashboard like the “Explore Ignition” page in the quick start project, using the Designer.

Conclusion

This is just a very quick guide to getting a Sparkplug setup going. The Ignition platform has a lot of capability to interface to a wide variety of edge nodes, devices, databases and other systems, as well as flexible UIs. As the Sparkplug specification is formalized and becomes a standard, we expect that other platforms with other focusses will become available too.

There are many other guides and resources to help you continue understanding and using Sparkplug.

MQTT-SN Standardization Progress

Good News! The MQTT-SN standardization is progressing nicely. That isn’t to say at warp speed, but at a steady pace in the right direction with no significant obstacles so far. Of course, someone might decide to discover one in response to this article, but it’s better to know about any now if they exist.

Looking back, I think we had two major objectives, apart from the benefit of standardization itself:

  • To take into account implementation experience and make improvements. MQTT-SN was devised in a different era of MQTT and networks: things have moved on.
  • To align more closely with MQTT 5.0 which had been developed in the meantime. Ironically, in the odd case such as cleanstart/cleansession, this means moving back in time to pre-MQTT 3.1.1.

Simon Johnson brought the experience of ThingStream’s use of MQTT-SN in a production environment into the fray. We started with that, along with some alignment with MQTT 5.0. I thought at the beginning that I would prefer the fewest changes we could get away with, but as our discussions proceeded I realized there were more changes I would like to see adopted.

So we’ve ended up with a list of about 40 issues. Through the end of 2020 and the first few months of 2021 we proposed solutions to those and had them agreed. In the meantime, Andy Banks had updated the original source specification to the format used at OASIS and started making basic corrections.

Since then, Simon and I have updated the draft specification with the issue proposals to result in the latest working draft. At this stage, or with a few minor extra changes, we will have completed the technical updates needed to implement the new version of MQTT-SN.

Next Steps

There is still a lot of work before the specification is complete: the non-normative sections, introduction, conformance statements and formalization (RFC-izing) of the language for example. I estimate all this will take six months at least.

The immediate steps are:

  • Take the current working draft to the MQTT Technical Committee to hear any objections to our direction of travel. I don’t expect any, but I want to find out.
  • Assuming no objections, we can then embark on the process of completing the non-normative sections and rest of the updates.

In the meantime, Simon and I are planning implementations, Simon in Java, mine in Python. If anyone else wants to start implementing then that would be welcome. Having some implementation experiences is necessary for the specification to become an OASIS standard, as well as double checking our work.

I hope that later in the year, we will have a specification ready for release as a first Committee Specification Draft (CSD), along with a couple of implementations. Reaching a final standard should then be possible in 4Q this year or 1Q the next.

One Thing! – Security

I heard, through Simon, that CoAP has added functionality to deter security issues. I didn’t know what that was, but a quick search resulted in hits about DDoS attacks. From a quick read, I think that MQTT-SN isn’t vulnerable to the same type of attacks, but perhaps QoS -1 publishes (which need no connection) could be used in that way, so they should be thought about carefully before use. If anyone has any thoughts or more information on this topic, do speak up.

MQTT-SN Authentication

This discussion is part of the standardization of MQTT-SN at OASIS. Here is a summary of the current issues we are working on. The issue for authentication is 568.

In the current MQTT-SN specification (1.2), there is no authentication capability included apart from the client id. This applied to the early versions of MQTT as well, so it’s not that surprising as MQTT-SN was derived from MQTT.

Later, userid and password fields were added to the MQTT connect packet, with optional external TLS encryption applied to the TCP/IP connection. In MQTT 5.0 an enhanced authentication capability was added by means of the new AUTH packet.

It’s generally agreed that we should add authentication to MQTT-SN. There are a few considerations which are different to MQTT however. MQTT is aimed at low powered devices, but MQTT-SN is notionally aimed even lower at connectionless network communications such as UDP where packet sizes are limited and TCP capabilities such as fragmentation do not exist.

So we need to be even more aware of data overhead, bloat and efficiency than we do with MQTT. In the base version of MQTT-SN, 1.2, there is a maximum of only one variable length field in each packet. This means that there is no length field apart from that for the whole packet, unlike in MQTT where there are many variable length fields each with its own length. So MQTT-SN is more efficient in terms of the size of each packet, at the expense sometimes of multiplying the number of packets needed. The MQTT-SN will message operations are an example of that.

The MQTT-SN 1.2 connect packet is structured like this:

Length n (byte 0) | MsgType (1) | Flags (2) | ProtocolId (3) | Duration (4,5) | ClientId (6:n-1)
MQTT-SN 1.2 CONNECT packet
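To make the single-length-field structure concrete, here is a rough Python sketch of packing such a CONNECT packet. The MsgType value 0x04, ProtocolId 0x01 and flag positions are my reading of the MQTT-SN 1.2 specification; treat this as an illustration rather than a complete implementation.

# sketch: pack an MQTT-SN 1.2 CONNECT packet (one length byte, one variable length field)
import struct

def connect_packet(client_id, keep_alive=60):
    flags = 0x04   # CleanSession bit set
    body = struct.pack("!BBBH", 0x04, flags, 0x01, keep_alive) + client_id.encode()
    return bytes([len(body) + 1]) + body   # whole-packet length is the only length field

print(connect_packet("sensor-1").hex())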

The flags byte is an identical format for all the packets that use it, and is fully used – I’ll come back to that in a moment.

To add authentication to MQTT-SN I propose that an additional AUTH flag is added to the connect packet, which tells the server that an AUTH packet is going to be sent following the CONNECT packet. The AUTH packet consists of the following fields:

Length n (byte 0) | MsgType (1) | Length m (2) | Auth Method (3:m+2) | Auth Data (m+3:n-1)
Proposed MQTT-SN AUTH packet with two variable length fields

This is designed to allow the operation of SASL framework authentication mechanisms and follows MQTT 5.0 closely. The current list of registered mechanisms includes PLAIN, a single message including the following fields:

  • authorization identity
  • authentication identity
  • password

each separated by a NUL (U+0000) character. Using this mechanism allows the CONNECT packet to be followed by an AUTH packet which includes a userid (authentication identity) and password, and so caters for the simple MQTT like case.
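As an illustration only (the AUTH packet is still a proposal, and the message type code below is hypothetical), building the PLAIN data and wrapping it in the proposed layout could look like this:

# sketch: SASL PLAIN data (RFC 4616) wrapped in the proposed MQTT-SN AUTH packet layout
AUTH_MSG_TYPE = 0x16   # hypothetical - no type code has been assigned yet

def plain_auth_data(authcid, password, authzid=""):
    # authorization id, authentication id and password, NUL separated
    return b"\x00".join([authzid.encode(), authcid.encode(), password.encode()])

def auth_packet(method, data):
    method = method.encode()
    body = bytes([AUTH_MSG_TYPE, len(method)]) + method + data
    return bytes([len(body) + 1]) + body   # Length n | MsgType | Length m | Method | Data

print(auth_packet("PLAIN", plain_auth_data("sensor-1", "secret")).hex())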

More sophisticated methods can also be used – this should allow any of the registered SASL mechanisms to be used. The server can also send an AUTH packet, either before a CONNACK, or during a session for re-authentication of the client.

It would be neater from a packet structure point of view if the authentication method were a fixed rather than variable length field. However, SASL mechanism names have a maximum length of 20 characters, so including a one-byte length field is going to be more efficient just about all the time.

Another alternative could be to have a one byte code for the authentication method. But someone would have to maintain the table of translations from codes to meanings, or it would be left implementation dependent, hindering application portability.

Finally, to fit the AUTH flag into the connect packet, some space will have to be made. As the CONNECT flags are a completely different set to those used in the other packets, I propose that they are separated. The proposed CONNECT flags:

x (7) | x (6) | x (5) | x (4) | Will (3) | CleanStart (2) | x (1) | Auth (0)
Proposed CONNECT flags; x means reserved

DUP (bit 7) | QoS (6,5) | Retain (4) | x (3) | x (2) | TopicIdType (1,0)
Proposed flags for other packets

I have left the positions of the existing fields unchanged for continuity, but they could be changed for neatness.
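A trivial sketch of how a client might assemble the proposed CONNECT flags byte (again, the bit positions are only a proposal):

# sketch: build the proposed CONNECT flags byte - bit positions are only a proposal
def connect_flags(will=False, clean_start=True, auth=False):
    return (will << 3) | (clean_start << 2) | (auth << 0)

print(bin(connect_flags(clean_start=True, auth=True)))   # 0b101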

So this is my current first draft proposal. I’d like to get any thoughts that any of you might have. I haven’t used the SASL mechanisms myself so hearing from anyone who has could be particularly useful.

Can MQTT-SN out-perform MQTT?

I don’t know of any rigorous comparisons, mainly because up to now MQTT-SN has found only limited use.

I think MQTT-SN could perform better than MQTT under certain circumstances but I wouldn’t say it’s likely as a blanket statement.  First of all there are the different characteristics of UDP and TCP.  TCP has the reliability and segmentation, so the quality of your connections and payload sizes will be a factor.  For instance, if you have an unreliable (satellite) link, you may need to retry UDP messages yourself which could be worse than letting TCP do it for you.

There is at least one scenario where I think MQTT-SN should perform better than MQTT, and I think it’s a good way of thinking about the comparison.  In IBM we used to from time to time discuss how to get MQSeries used on financial trading floors.  There, TIBCO for one, reigned supreme, and we could not make headway because of performance.  The reason the competition performed better, in terms of message latency, was because they used UDP multicast.  Where MQ used TCP client-server connections for pub-sub (not MQTT but identical topology), TIBCO publishers would send messages to a multicast group.  The filtering for topics interested in was done at the client end – all messages would be received by the client library, but only those subscribed to would be passed on to the application.  I think that content was not encrypted (for speed), because the system was limited to the self-contained and isolated trading floor.  As soon as you add more connectivity you have to think about security, auth and encryption, which slows everything down from the optimal.

A similar solution can be implemented using MQTT-SN QoS -1, at least over UDP, and I think it could definitely be faster.  But multicast is limited to a LAN or subnet, not available on WANs.  QoS -1 multicast is inherently unreliable – although that’s probably just fine on a network that’s not overloaded.  Whether an MQTT-SN connection oriented UDP solution using MQTT-SN QoS 0, 1 or 2 would be faster than a similar MQTT one, I’m not sure.  The differences could be marginal.

In many cases, I think the fastest solution could be a fat MQTT pipe from the cloud to the MQTT-SN gateway, then MQTT-SN multicast on the LAN.  If you want high security then you might need a connection oriented MQTT-SN solution.  Going completely MQTT-SN instead of MQTT might be faster but I wouldn’t bet on it.  And I expect many solutions will need the extra features of MQTT; they wouldn’t be able to live with the limitations of MQTT-SN.

Who is using Paho?

I stopped working for IBM in October of last year (2019) after several decades. The Eclipse™ Paho™ open source project was started in 2011 by IBM under the auspices of the Eclipse Foundation. I’ve been involved in it as a contributor since the beginning. The goal was to help create a community around MQTT – I think that has been achieved.

Working on open source has been a fulfilling activity. It allowed me to have largely unfettered control over my own work, to concentrate on the doing instead of talking about doing, and to get direct feedback from users. On top of that, there’s the feel-good factor of being part of the open source movement and the mission described by the OSI:

Open source enables a development method for software that harnesses the power of distributed peer review and transparency of process. The promise of open source is higher quality, better reliability, greater flexibility, lower cost, and an end to predatory vendor lock-in.

Since leaving IBM I have been motivated by that sense of fulfilment and responsibility to continue the maintenance and development of the Paho projects that I created. For a significant amount of time in recent years I was the only person from IBM working on Paho. Now, I’d like to be able to help out with all the Paho projects, but unfortunately I don’t have enough time for that. I’ve started by addressing the backlog of the Paho C client. That’s going pretty well and I hope that with two further releases in the near future I’ll have the issues down to a manageable level.

Fortunately since the beginning of the year, Ranjan Dasgupta from IBM has been working on the Paho Java client, so that’s one less item for me to worry about. I do plan to take a look at the Android client, and also start looking at the embedded C and MQTT-SN embedded clients, but probably not all at once.

Now we come to the main point of this post. I’ve heard mentions in recent conversations of some of the Paho client libraries being used in large projects or by significant numbers of clients of medium to large enterprises. In one respect, I sort of knew that to be the case, but it did take me somewhat by surprise. That’s maybe because I am now not employed to work on Paho, so I’m interested in the expectations that such users have for support.

While I was working at IBM we used a lot of open source software. IBM made and still makes large contributions to open source projects, both in funding and personnel. But small open source projects can find themselves left out in the arena with so many larger starry projects competing for attention. Sometimes I made small financial contributions to projects that I found myself using routinely, or that were crucial, especially if they were produced mainly by volunteers.

Also a few years back, we asked for any “success stories” of people and organizations using Paho components. We received a couple of replies, but I know for sure that there are many more successful deployments. If you are using a Paho software component, especially for production, then I’d like to hear from you. You can comment on this post, send email to paho-success@eclipse.org, or contact me on Twitter.

I’d like to be able to tell the world about any Paho successes. If you do rely on any of my work in Paho, then do consider sponsoring me.

MQTT-SN Alignment with MQTT

We are starting to work on the standardization of MQTT-SN – MQTT for Sensor Networks. The current specification for MQTT-SN is in a similar position to that for MQTT before it became a standard at OASIS; it is published by IBM, freely available with a liberal license, and has been in use for several years. It is not as widely used as MQTT was at the same point, but there are several existing users and implementations. Given this, I propose that in the process of standardization we take the approach that was adopted for MQTT 3.1.1 – minimal changes to the existing specification to allow standardization to proceed as quickly as these things ever do. In the case of MQTT 3.1.1, that was about two years.

While there is a general agreement to get things moving quickly, a concern has been raised from a couple of quarters. That is, the current MQTT-SN specification was written before MQTT 5.0 existed. One of the primary goals of MQTT-SN is to extend MQTT to non-TCP networks – to do so, it must allow the easy interoperation of the two protocols. Messages from MQTT must be able to flow to MQTT-SN and vice versa. The concern is that the current MQTT-SN specification aligns more closely with MQTT 3.1.1 than with 5.0, and we should really be aiming at 5.0 as that is likely to be more frequently used in the future.

Several features in MQTT 5.0 were influenced by MQTT-SN in fact, so the flow of concepts might be towards MQTT rather than MQTT-SN. In this article, I’ll go through the aspects of MQTT-SN and see how they match up to the two MQTT standards, 3.1.1 and 5.0.

MQTT-SN Server

MQTT-SN is both a client/server and peer to peer protocol. An MQTT-SN server can be a broker in the MQTT sense, or a gateway which does little more than act as a mediator between MQTT-SN and MQTT and the underlying transports. Here I will use the term server to refer to both brokers and gateways for MQTT-SN.

Packet Format

Every MQTT packet has a header byte followed by a variable-length remaining length field. Some of the packets have multiple variable-length fields (string or binary) as part of their construction. Although MQTT packet sizes are still kept as small as is feasible, MQTT-SN is intended to be suitable for even lower power devices and used over networks with fewer reliability characteristics than TCP. Each MQTT-SN packet, apart from one possible exception, has a single variable-length field within it, so only one length field is needed for each packet, helping to reduce their size (the two length encodings are sketched after the list below). As a result, for instance, the MQTT connect packet has been split into several MQTT-SN packets:

  • WILLTOPICREQ – sent by the server to request that a client sends the will topic name
  • WILLTOPIC – sent by the client to tell the server its will topic name
  • WILLMSGREQ – sent by the server to request that a client sends the will message
  • WILLMSG – sent by the client to tell the server its will message
  • WILLTOPICUPD – sent by the client to update its will topic name stored in the server
  • WILLTOPICRESP – sent by the server to confirm the will topic name has been updated
  • WILLMSGUPD – sent by the client to update its will message stored in the server
  • WILLMSGRESP – sent by the server to confirm the will message has been updated
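To see why the single length field matters, here is a rough sketch of the two length encodings side by side: the MQTT remaining-length variable byte integer, and the MQTT-SN length field (one byte, or three bytes with a 0x01 prefix, as I read the 1.2 specification).

# sketch: MQTT remaining-length (variable byte integer) vs MQTT-SN length field
def mqtt_remaining_length(n):
    out = bytearray()
    while True:
        byte, n = n % 128, n // 128
        out.append(byte | 0x80 if n else byte)   # set the continuation bit while more follows
        if not n:
            return bytes(out)

def mqttsn_length(n):
    # one byte normally; 0x01 followed by a two byte length for larger packets
    return bytes([n]) if n < 256 else b"\x01" + n.to_bytes(2, "big")

print(mqtt_remaining_length(321).hex(), mqttsn_length(321).hex())   # c102 010141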

Connect and Disconnect

The fields in the connect packet are:

  • Will flag – request will topic and message prompting
  • CleanSession – as in MQTT 3.1.1
  • ProtocolId – corresponds to protocol name and version as in both MQTT 3.1.1 and 5.0
  • Duration – keep alive timeout as in both MQTT 3.1.1 and 5.0
  • ClientId – as in both MQTT 3.1.1 and 5.0

The clean session flag operates in a similar manner to MQTT 3.1.1 in that the cleanup operates at both the start and end of the session. In MQTT 5.0, the clean session flag becomes the clean start flag, and a separate session expiry property dictates when the session state is removed on the server. The MQTT 5.0 facilities are much more flexible and I would advocate changing MQTT-SN to match. One way to achieve this would be to add a 2 byte integer session expiry field (matching the duration field) to the CONNECT packet.

The clientid variable length field is allowed to be 0-length in both MQTT 3.1.1 and 5.0, indicating that the server should assign a clientid itself. If we do allow this behaviour in MQTT-SN, I think it’s important that the assigned clientid is returned to the client, as it is in MQTT 5.0. This could be done by including the server assigned clientid in the MQTT-SN CONNACK packet, which currently has no variable length field.

The MQTT-SN DISCONNECT packet has a duration field, which operates in a similar way to the MQTT 5.0 session expiry property. In this case MQTT-SN is already closer to MQTT 5.0. MQTT-SN also allows the DISCONNECT packet to be sent by the server to the client, so that the client has more information about the reason for the disconnection. This is forced on MQTT-SN anyway because, unlike MQTT, there may be no underlying connection (TCP in MQTT’s case) to break – over UDP, for instance. Again, this is closer to MQTT 5.0 than 3.1.1; the latter does not allow the server to send a DISCONNECT packet.

In fact the MQTT-SN behaviour on disconnect is more sophisticated than MQTT 5.0 (see Sleeping Clients in the MQTT-SN specification), but that doesn’t alter the fact that it is closer to 5.0 than 3.1.1.

Will Processing

The will message processing in MQTT-SN uses 8 packets as listed above, so is equally removed from both MQTT 3.1.1 and 5.0. Section 6.3 of the MQTT-SN specification lists the combinations of interactions between the clean session and will flags on the connect packet – I think these would remain intact if we changed the clean session flag to clean start to match MQTT 5.0. So I feel there is no need to change the will processing in MQTT-SN to align more closely with MQTT 5.0. There may be other reasons to change it, but this isn’t one.

Topic Names

In both MQTT 5.0 and MQTT-SN topic ids can be used instead of a full topic string. However, in MQTT-SN this is almost compulsory, because the PUBLISH packet’s one variable length field is the payload. The topic data is limited to a two byte field to hold a topic id (2 byte integer) or a short topic string (2 bytes). In MQTT 5.0 the topic id is registered by including it in the publish packet along with the long topic string. In MQTT-SN this registration is delegated to a separate packet, REGISTER, which must be sent before sending a PUBLISH packet. This applies to both clients and servers.
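A sketch of what the existing PUBLISH layout looks like once a topic id has been registered (the MsgType value 0x0C and field order are my reading of the 1.2 specification) – note that only the payload is variable length:

# sketch: MQTT-SN 1.2 PUBLISH using a registered (2 byte) topic id
import struct

def publish_packet(topic_id, msg_id, payload, flags=0x00):
    body = struct.pack("!BBHH", 0x0C, flags, topic_id, msg_id) + payload
    return bytes([len(body) + 1]) + body

print(publish_packet(topic_id=1, msg_id=42, payload=b'{"t":21.5}').hex())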

This does lead to a problem when using the PUBLISH packet in QoS -1 mode, which is exclusive to MQTT-SN. QoS -1 in MQTT-SN means that a client can send a message to a server outside of the familiar CONNECT/DISCONNECT session start and end range. Typically this could be used in a multicast environment where the client is not sure of the location of the server. There are a number of proposals to allow a variable length topic name to be included in a PUBLISH packet. At least one has already been implemented, which is to use the spare topic id type indicator to specify that the topic name is a variable length field, with the topic id integer holding the length. This would mean the PUBLISH packet would be the only one with two variable length fields (topic name and payload). I would advocate allowing this format for all PUBLISH QoS levels.

MQTT-SN also has the concept of pre-registered topic ids – there is no parallel in either version of MQTT.

I feel there is no need to change to particularly align with MQTT 5.0 – any problems with the existing MQTT-SN implementation of topic ids should be fixed for their own sake.

Subscribe/Unsubscribe

The MQTT-SN SUBACK packet includes a return code. As in MQTT 3.1.1, the UNSUBACK packet does not. It would seem sensible to add a return code to every ack. I think the UNSUBACK is the only one without. This would mirror MQTT 5.0 too.

Other MQTT-SN Features

Other capabilities of MQTT-SN are straightforward mirrors of simple MQTT function, favouring neither one nor the other, or have no analog in MQTT at all. These include:

  • PING and PINGRESP
  • Keep alive
  • Gateway advertisement and discovery
  • Forwarder encapsulation

Conclusion

It turns out there are more changes that I would like to see than I was expecting before writing this article. However, I feel they are more to do with fixing up some of the more irritating MQTT-SN aspects for themselves, rather than aligning with MQTT 5.0 per se.

Eclipse Paho Sponsorship by HiveMQ

It’s now three months since I left IBM and I’m starting to get used to the idea. I’ve been dealing with some of the Eclipse Paho C client backlog – firstly the PRs, and now onto some of the issues. It’s much easier to keep things under control with steady effort rather than in bursts with long gaps. Apart from not allowing the backlog to accumulate into a formidable looking mountain, it also means that familiarity with the code doesn’t deteriorate as far.

That’s just the first part of my backlog though. The Java client MQTT V5 support is almost complete – it mostly needs the addition of new tests, both MQTT V5 specific and for base MQTT function. I would like to finish that off, ideally in collaboration with someone who could become the Paho committer responsible for the Java client in future. Apart from the V5 support, I don’t expect to work on the Java client on my own time, so someone does need to pick that up if it is to continue. I’m having discussions with both IBM and Microsoft on this topic, but I’ve not had any demonstration of commitment yet.

Then there are the Embedded MQTT and MQTT-SN projects which I’ve not looked at for at least 18 months. I originally created both of these, along with the main C client project, so I am interested in keeping these going. But that will have to start with assessing the backlog. I am aided in the MQTT-SN project by one of my colleagues, Tomoaki, who I hope to start working with again soon.

The MQTT-SN project crosses over with the MQTT-SN OASIS standardization effort, of which I am co-chair, which will kick off soon. I hope that the standardization of MQTT-SN will allow it to extend the MQTT ecosystem beyond pure TCP networks and into peer to peer meshes.

I think it’s fair to say that over the last decade I’ve been the glue that has held the Paho project together. I’ve stepped in when needed on the Java client, and last year added MQTT V5 support to the Python client library. I wrote an MQTT V5 broker so that we could test our implementations early.

Paho Contributions

But as I am no longer being paid to work on Paho, some action is likely needed to allow the project to continue in an active form. I have other interests I want to pursue, and I expect they will take up more of my time. The main types of contribution are people and money. We do have contributions all the time in the form of issues and pull requests, which is great, but we need committers to handle that work, resolve the issues, assess the pull requests and create releases.

Shortly before Christmas, I discovered that GitHub has introduced the notion of sponsorship, GitHub Sponsors. So as an experiment, I’ve signed up. If you rely on the project for some component, especially the ones I work on, or want to show your support for the project’s continuation, this is one way to do it. The sponsorship tiers were set by me. I took a guess at convenient levels, so I can change them to a certain extent if that helps.

Of course, no matter how much money I might get in sponsorship, I only have a finite amount of time available. This is why I suggest that organizations which have an interest in the survival of Paho consider encouraging their own people to become more deeply involved in the project. The deepest involvement is becoming a committer and taking on project leadership.

If there is insufficient interest to take actions to keep the project going then I am happy for the project or specific components, like the Java client, to be deprecated. So it’s really up to the community to let me know what you want, to demonstrate that you want the project to continue.

HiveMQ Sponsorship

I’ve had a discussion with HiveMQ already as I know the company has clients who use Paho to connect with their MQTT brokers. They do want the Paho project to continue and have agreed to sponsor me for $500 per month. I appreciate that for small organizations, time and effort is proportionally more expensive than for larger organizations, so a financial contribution might be the easiest option.

I would like to thank HiveMQ for getting the ball rolling with Paho sponsorship for me.

The Future of Paho

Since October 10th, I am no longer an employee of IBM. It still comes as a surprise to me to write those words, as I had worked for IBM in one way or another for almost my entire working life since leaving university. Now I’m just starting to get used to the idea.

It wasn’t my idea to start with, to leave IBM. Then at first, a job with another company was in the offing, but after a while that opportunity disappeared. (If you want to know more details, then if we meet in person I might divulge some more over a coffee or beer.) In the meantime, I had considered the possibility of retiring. To my surprise, I realized that it was possible to do so. It was a surprise because I had assumed I would be working until my mid-60s before I could afford to leave work.

During the preparation to leave and now that I have, I have been thinking about the future of the Eclipse Paho project. I’ve been project lead since 2014, and was one of the original contributors in 2011. My main contribution was the C MQTT client, then embedded C MQTT and MQTT-SN client libraries and Python MQTT test material. In the meantime, when there was no one else to look after the Java client library, I looked after that too.

I originally became project lead because the project progress had stalled – I stepped into the vacant space to do the job that needed doing. Since then I think it’s fair to say I have been the continuity behind Paho, ensuring its survival and success.

I intend to continue supporting the C clients and other components I created, and also to lead the project. I hope to finish off the first release of the Java client for MQTT V5, based on the Eclipse Vert.x asynchronous event library.

Another person in IBM has been identified to help out with the Java client, project leadership and so on, but that transition didn’t start until shortly before I left so I don’t know how well that’s going to work out.

My intention is to carry on contributing, but have other personal projects and activities that I also want to pursue. Over time, undoubtedly my priorities will change, although obviously I can’t predict how.

There is a message I’d like to send to anyone or any organization that relies on any Eclipse Paho components or the project in general. That is, consider making contributions in some form to encourage its continued existence. The greatest contribution would be regular contributors who could become committers responsible for maintenance and updates. But I would welcome other ideas. I am also open to offers of work on MQTT related projects, including Paho support.

Eclipse IoT Day Singapore

In July or August of this year, Benjamin Cabé of the Eclipse Foundation asked if there were any volunteers for presenting at an Eclipse IoT day in Singapore. As I had already given talks on MQTT V5 earlier in the year at the Eclipse IoT day in Santa Clara and EclipseCon France, I thought, why not? We do have a lot of interest from all over the world in MQTT, IoT and Eclipse, so it was a good opportunity to reach part of the world I don’t normally see in a professional capacity.

I should have known how far Singapore is from the UK, as I’ve stopped off there once or twice when travelling to New Zealand to visit family in recent years. Evidently I’d forgotten, as when I realised how far away Singapore is, I did have some second thoughts. Well, I thought, I’ll let the IBM travel authorization decision make the choice for me. As it happens I found some flights at a very reasonable cost, the travel authorization was granted, and my travel plans confirmed.

I am happy to stay at home and not travel extensively, so this year has been unusual for me. But the completion of the MQTT 5.0 standardization process is a significant milestone, and people want to hear about the standard and its implementations, so I’ve made the effort. The first stage was getting to Singapore, which involved two 7 hour flights, arriving on the Monday morning of the week of the conference. This was the day after the Singapore F1 GP had taken place, so the course was still being dismantled. It took the taxi driver an extra 25 minutes to reach the hotel because of the continuing road closures!

After I had a few hours sleep, Benjamin and I met up later in the day and took a walk through the Gardens by the Bay which is the one sight that I definitely wanted to see in Singapore. We had some crushed sugar cane to drink, and chicken rice to eat, all of which was very satisfying. After taking some photos of the garden I walked back to the hotel, which was still complicated by the road closures, but I made it.

After a fairly good night’s sleep, I made the short walk from the hotel to the Conference Center using the Double Helix footbridge across the bay. The conference centre is opposite the Marina Bay Sands hotel (the one with the ‘ship’ on top of three towers) which always features extensively in the F1 TV coverage.

After the morning coffee and registration, Benjamin was first up with the State of The Union of Eclipse IoT, summarizing the progress that has been made over the past six or seven years. The room was full to overflowing, and the interest high.

Eclipse IoT State of the Union

Then I was up, largely giving a re-run of the talks I had given in California and France earlier in the year. The videos of these talks, as well as the others recorded on those days, are available on YouTube. As before, I ran through a quick history of MQTT, the reasons for defining a new version relatively soon after 3.1.1 had been finished, the new features of version 5.0, and a quick demo of Paho support.

Where next for MQTT?

My demo included sending messages over MQTT 5.0 to IBM’s IoT Platform which has announced beta support for this latest version of MQTT.

To explain again, the reason for another version of MQTT now is that the first standardized version, 3.1.1, was limited in the scope for changes to:

  • reach a completed standard quickly, and
  • be compatible with existing implementations.

This left some outstanding irritants. MQTT 5.0 attempts to fix them while still conforming to the goals of being lightweight and simple.

During the breaks I had the chance to talk with a collaborator of mine on the Eclipse Paho project. Tomoaki and I have been working together on MQTT-SN projects for several years, and separately for a number of years before then. As Tomoaki lives in Japan, I never thought I would meet him in person, but he made the long journey to Singapore so that we could. It was an extremely pleasant and productive meeting, as we discussed other potential MQTT-SN activities such as support for DTLS.

Ian and Tomoaki!

When the photo was taken I was also enjoying the warm tropical rain. Rain in the UK is always cold – here it felt like it was almost evaporating before it hit the ground, due to the high temperature.

The last session of the morning was Oliver Meili of Bosch SI describing the company’s extensive involvement with Eclipse IoT.

Bosch SI Eclipse IoT Projects

The lunch break was followed by talks on the ioFog, Vorto and Cyclone DDS Eclipse IoT projects. ioFog is an approach to edge computing developed by a startup company embracing open source at its core. Vorto enables translation between IoT model definitions in a variety of formats, and Cyclone is an open source DDS implementation.

The variety of Eclipse IoT projects available now is impressive. You can discover the full range at the Eclipse IoT website.

After the day’s sessions, I met with Tomoaki, then Kilton Hopkins (the driving force behind ioFog) and Benjamin for stimulating discussions, food and drink before retiring to the hotel. The following two days I spent mostly at the Eclipse IoT booth at IoT World, answering questions about the Eclipse Foundation, the Eclipse IoT portfolio, IBM and other companies’ contributions to Eclipse IoT, and the Eclipse Paho project and MQTT. Many visitors were unfamiliar with all of these topics, so the trip was well worthwhile, even given the amount of travelling involved.

Using MQTT V5 with the IBM Watson IoT Platform and the Eclipse Paho C client

I’ve just finished off release 1.3 of the Eclipse Paho C client which includes MQTT 5.0 and WebSockets support. Another thing I’ve done is to update the command line utilities to be much more capable, so in this post I’ll describe how they can be used to connect to the IBM Watson IoT Platform using MQTT V5.

MQTT V5 is the latest version of the successful protocol which is the core of Paho’s capability. I’ve delivered a number of talks about MQTT V5, if you’d like to learn more about it that way. I’ve also previously written about how the C client command line utilities can use MQTT V5 to connect to an MQTT server which supports V5 – we have one within the Paho project.

First you need to install the Eclipse Paho C client, if you haven’t already. As described on the linked page, the easiest way on Windows is probably to install the pre-built binaries, while on Linux and the Mac, for the time being, you will need to build from source. Running the command

paho_c_pub

will check that it runs, and display the full list of options.

IBM’s IoT Platform has a quick start playground, where you don’t have to sign up to try sending events. We’re going to connect to that first of all. Go to the web page, accept IBM’s terms of use, then enter a device id in the input box. This can be any sequence of characters you like (there are some restrictions and there is a length limit) – its purpose is to distinguish your device from other peoples’. So make it something unique enough, then push ‘Go’.

To connect the Paho publisher program to the platform with MQTT version 5, run the command:

paho_c_pub -c ssl://quickstart.messaging.internetofthings.ibmcloud.com:8883 -t iot-2/evt/myevent/fmt/json -i d:quickstart:my_device_type:my_device_id -V 5

replacing my_device_id with the device id you typed into the IoT Platform input box. Now the program is waiting for you to type an input message. The message should be in JSON format, because that’s what the quick start application is expecting, so type something like this:

{"t":2}

and press enter. You should see a point appear on the graph. Try sending a few more values, varying the number but keeping the rest the same, and you should see a line graph drawn something like this:

And that’s it! The -V 5 option at the end of the command line indicates that MQTT 5.0 was used rather than MQTT 3.1.1. The utility currently takes two MQTT V5 specific options – user-property and message-expiry – although the platform doesn’t make use of them yet. The “--trace protocol” option will display details of the TLS exchange and the MQTT packets sent and received.
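If you’d rather script the publish than type interactively, roughly the same thing can be done with the Paho Python client. This is a sketch under the same assumptions – the quickstart endpoint, TLS on port 8883 and MQTT v5 – and you should replace my_device_id as before:

# sketch: publish a quickstart event with the Paho Python client (1.x style API) over MQTT v5
import json
import paho.mqtt.client as mqtt

client = mqtt.Client(client_id="d:quickstart:my_device_type:my_device_id",
                     protocol=mqtt.MQTTv5)
client.tls_set()   # use the default system CA certificates
client.connect("quickstart.messaging.internetofthings.ibmcloud.com", 8883)
client.loop_start()
info = client.publish("iot-2/evt/myevent/fmt/json", json.dumps({"t": 2}), qos=1)
info.wait_for_publish()
client.loop_stop()
client.disconnect()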

If you already use the platform, to connect to an existing organization, use:

paho_c_pub -c ssl://orgid.messaging.internetofthings.ibmcloud.com:8883 -t iot-2/evt/event-id/fmt/json -i d:orgid:device-type:device-id --password auth-token --username use-token-auth -V 5

substituting appropriate values:

orgid – your organization id
event-id – the event id you want to send events on
device-type – your device type
device-id – your device id
auth-token – the authentication token of the device previously defined

You can then send event data in the same way.