Real Time Delivery Updates with Slack, MongoDB Atlas & Tailable Cursors
If you’re anything like me, being quick and effective when taking on a job of any size is of the utmost importance. As a developer, I use a handful of applications on a daily basis to automate mundane small tasks, so that I can focus my energy on the crucial, complex undertakings and ensure that I’m at my highest levels of productivity. One of my favorite tools is Slack, because it is so heavily integrated into my workflow and is like the web that ties all of my tools and tasks together.
In the event you’ve been living in a cave for the last couple of years, Slack is the team communication app for the 21st century. Slack has been built to be easy and fun for teams to use, AND it offers a broad set of APIs that allow developers to extend its capabilities and make it even more useful (and fun). One of the features I love most about Slack is the Slackbot, a friendly robot available in every Slack team that guides users through creating their profiles and explains how Slack works. Slack ships with several built-in bots (handy little assistants that hang out in your app, wait for commands, and then find or create the thing you need) and also allows you to create your own.
What enthralls me most about Slack is the available potential to build your own custom bots. At the most basic level, bots in Slack are special automated users that can respond to specific events and do useful things to help your team be more productive. The possibilities of bots you can build are endless and the challenge of reaching these outer limits and innovating in ways no one else is makes it all the more fun to pursue.
In this post, we’ll set out to create a custom Slack chat bot that simulates ordering a pizza, tells you when your pie is ready, and reports its location at any given time. To build this bot, we’ll piece together a few popular technologies: Node.js, Slack, MongoDB Atlas, and MongoDB Compass. Additionally, we’ll make use of MongoDB tailable cursors for real-time notifications.
To follow along with this tutorial, we’ll assume that you have basic Node.js knowledge, including an installed version of Node.js and NPM. You will also need a basic understanding of the command line and AWS EC2.
# Create a New Bot for Your Slack Integration
Because I love food, especially when it comes to me, I thought it would be fun to walk through the process of building a bot that gives you an ETA on how soon you can start chowing down on some delivery pizza. To add a new bot to your Slack organization, visit https://[yourorganization].slack.com/services/new/bot, where yourorganization must be substituted with the name of your organization (e.g. https://studio5eleven.slack.com/services/new/bot).
Tip: Ensure you are logged in to your Slack organization in your browser and that you have the admin rights to create a new bot.
Step 1: First, you need to choose a name for your bot. Mr.ThinCrust seemed appropriate for this little guy:
Step 2: Next, you will move to another screen where you will copy your API token:
Copy the token and save it in a safe place; you will need it in a little bit.
In this section you can also specify some more details about your bot, like adding an avatar image to make your bot look unique.
# Sign Up For MongoDB Atlas (DBaaS)
MongoDB Atlas makes it easy to set up, operate, and scale your MongoDB deployments in the cloud. From high availability to scalability, security to disaster recovery — MongoDB Atlas has you covered.
To get started, head on over to https://mongodb.com/cloud/atlas and create an account.
# Build Your New MongoDB Cluster
MongoDB Atlas will walk you through the process of setting up your cluster in a handful of intuitive steps, so you don’t have to worry about knowing what specs to choose if you’re not a veteran (or even if you are).
For the purpose of this application, we’re going to name our cluster “thin-crust”; however, you’re welcome to name your cluster whatever you’d like. For the rest of the setup, we are going to stick with the lowest-tier options, as this is only a prototype and we don’t anticipate a large surge of traffic.
Note: You are required to provide a username and password for your cluster. I would suggest that you choose the option to let MongoDB Atlas generate a random password on your behalf. Please ensure you store this in a safe place (maybe with that API Token you wrote down when creating your Slack Bot 😉 ), as you will need it later on in this tutorial.
Once you have completed the provisioning steps, click “Confirm & Deploy” and your MongoDB Cluster will automatically be provisioned by MongoDB Atlas. Once complete, you’ll receive a notification that your cluster has been provisioned and you will be able to click on the “Connect” button as shown below:
Make sure you copy the URL connection string from this step and save this with your password. You will need this value in order to move forward.
# Connecting to MongoDB Atlas with Compass
If you’re like me, you will probably enjoy using a nice graphical interface to explore your MongoDB Cluster. Thankfully, the wonderful folks over at MongoDB put together an amazing tool called Compass, a GUI for MongoDB that is honestly the easiest way to explore and manipulate your MongoDB data. It works seamlessly with MongoDB Atlas, so I’ve chosen to use it in this tutorial. Compass is available on Linux, Mac, or Windows. To download Compass, head on over to https://www.mongodb.com/products/compass.
Once installed, open the application and you’ll be presented with a clean UI asking for login credentials to your cluster. Use the URL connection string (along with the password that you saved) that was provided in the dashboard in the last step to get you in.
# Creating Your Capped Collection
A MongoDB capped collection supports tailable cursors, which allow MongoDB to push data to listeners. When this type of cursor reaches the end of the result set, instead of closing, it blocks until new documents are inserted into the collection and then returns the new document.
Capped collections also have extremely high performance. In fact, MongoDB uses a capped collection internally for storing its operations log (or oplog). One thing to note is that, as a trade-off for their high performance, capped collections are fixed in size (i.e. “capped”) and, therefore, not shardable. Keeping things like this in mind can help you choose the most effective tool for the job as you continue to pursue new projects; it is always good to know what tools you have and when to use them.
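To make the “push” behavior concrete, here is a minimal sketch of tailing the capped collection with the official mongodb Node.js driver (the 3.x driver API, the MONGODB_URI environment variable, and the collection name are assumptions; the connection code only runs when the variable is set):

```javascript
// Sketch: opening a tailable cursor on the capped "messages" collection.
// With awaitData set, the cursor blocks at the end of the result set
// instead of closing, and emits each newly inserted document.
function tailableOptions() {
  return { tailable: true, awaitData: true };
}

// Only connect when a connection string is configured
if (process.env.MONGODB_URI) {
  const { MongoClient } = require('mongodb');
  MongoClient.connect(process.env.MONGODB_URI, (err, client) => {
    if (err) throw err;
    const stream = client.db()
      .collection('messages')
      .find({}, tailableOptions())
      .stream();
    stream.on('data', doc => console.log('new document:', doc));
  });
}
```

This is the same mechanism our Slack.js script will rely on later: no polling loop, just a cursor that hands us documents as they arrive.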
To create your capped collection, click on “Create Collection” and name the collection “messages”. Click on the “Capped Collection” checkbox and specify a capped collection of 8,000,000 bytes (ample space for this demo).
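If you prefer the mongo shell to Compass, the same capped collection can be created with a single command (this is just an alternative, not a required step):

```javascript
// Create an 8 MB capped collection named "messages"
db.createCollection("messages", { capped: true, size: 8000000 })
```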
# Jumping into the Code
As previously mentioned, we’ll be using Node.js for this tutorial. The code is rather straightforward, so as long as you understand the basics of programming, you should be able to follow along quite well.
Our Node application (or “thin-crust”, as I call it) consists of three primary files, with each file having its own set of tasks:
- Index.js — Initializes the Slackbot and posts the initial message to Slack, followed by an update when the pizza comes out of the oven.
- Worker.js — Connects to the MongoDB cluster and inserts random street addresses into the newly-created “messages” collection.
- Slack.js — Connects to Slack and the MongoDB cluster, listening to incoming updates to the capped messages collection, and posts messages to the specified (food) channel.
File 1: Index.js
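As a rough sketch of what Index.js might contain, based on the description above (the slackbots npm package, the “food” channel name, the message wording, and the simulated oven time are all assumptions, not the author’s confirmed implementation):

```javascript
// Index.js (sketch) -- post an order confirmation to Slack, then an
// update once the (simulated) oven time has elapsed.
const ORDER_TIME_MS = 5 * 60 * 1000; // simulated time in the oven

function orderMessage() {
  return 'Order received! Your pizza is going into the oven.';
}

function readyMessage() {
  return 'Your pizza is out of the oven and on its way!';
}

// Only start the bot when a token is configured
if (process.env.SLACK_TOKEN) {
  const SlackBot = require('slackbots');
  const bot = new SlackBot({
    token: process.env.SLACK_TOKEN,
    name: 'Mr.ThinCrust'
  });

  bot.on('start', () => {
    bot.postMessageToChannel('food', orderMessage());
    setTimeout(() => {
      bot.postMessageToChannel('food', readyMessage());
    }, ORDER_TIME_MS);
  });
}
```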
File 2: Worker.js
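A similar sketch for Worker.js (the address list, the five-second interval, and the mongodb 3.x driver API are assumptions):

```javascript
// Worker.js (sketch) -- insert a random street address into the capped
// "messages" collection every few seconds, simulating delivery updates.
const ADDRESSES = [
  '123 Main St', '45 Elm Ave', '9 Oak Blvd', '77 Maple Dr'
];

// Pick a random address from the list above
function randomAddress() {
  return ADDRESSES[Math.floor(Math.random() * ADDRESSES.length)];
}

// Only connect when a connection string is configured
if (process.env.MONGODB_URI) {
  const { MongoClient } = require('mongodb');
  MongoClient.connect(process.env.MONGODB_URI, (err, client) => {
    if (err) throw err;
    const messages = client.db().collection('messages');
    setInterval(() => {
      messages.insertOne({ address: randomAddress(), createdAt: new Date() });
    }, 5000);
  });
}
```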
File 3: Slack.js
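And a sketch of Slack.js, which ties the two together by tailing the capped collection and relaying each insert to Slack (the channel name and message wording are assumptions):

```javascript
// Slack.js (sketch) -- tail the capped "messages" collection and relay
// each newly inserted address to the #food channel.
function locationUpdate(doc) {
  return `Your pizza is currently near ${doc.address}.`;
}

// Only connect when both credentials are configured
if (process.env.SLACK_TOKEN && process.env.MONGODB_URI) {
  const SlackBot = require('slackbots');
  const { MongoClient } = require('mongodb');

  const bot = new SlackBot({
    token: process.env.SLACK_TOKEN,
    name: 'Mr.ThinCrust'
  });

  MongoClient.connect(process.env.MONGODB_URI, (err, client) => {
    if (err) throw err;
    const stream = client.db()
      .collection('messages')
      .find({}, { tailable: true, awaitData: true }) // tailable cursor "push"
      .stream();
    stream.on('data', doc => {
      bot.postMessageToChannel('food', locationUpdate(doc));
    });
  });
}
```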
The full repository can be downloaded from GitHub: https://github.com/nparsons08/thin-crust
# Deploying Your Application on AWS
Given that our scripts are lightweight, we don’t need a beefy server to power them. Node.js runs on a single core, and although there are ways to take advantage of multiple cores, we don’t need them here.
With that in mind, we’ll be using PM2, a lightweight process manager for Node.js. PM2 will essentially kick off the application’s processes (our scripts) and ensure that they stay online, restarting them should something cause them to fail.
PM2 can run on just about any server; however, for our objectives, we’ll use a general purpose t2.nano server on AWS, with Ubuntu 16.04 LTS. I’ll touch briefly on the steps required to provision a server, but most of the documentation is available online, so I’ll leave more extensive research up to you.
Step 1: Create an account, or login to your existing AWS account. Click on Compute Services > EC2 > Create Instance > Launch Instance. This will get you set up with a new instance for our project.
Step 2: Select “Ubuntu Server 16.04 LTS (HVM), SSD Volume Type” from the Server Types and click “Next”. Next, choose the t2.nano Instance Type from the list of available options. To finish, follow the necessary steps (keeping everything default) to provision the server. Once provisioned, you should see the Instance State as “Running” on your EC2 Dashboard.
Step 3: Click the checkbox next to the instance and click “Connect” from the top navigation. This will provide a popup dialog that will give you the connection string you need in order to move forward.
Step 4: Using the connection string provided, SSH into the instance and run sudo su. From there, navigate to the root directory by running cd ~. Next, we’ll install Node.js, NPM, and PM2.
Step 5: Run the commands in the following order:
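A plausible sequence for Ubuntu 16.04 (the specific NodeSource setup version below is an assumption; adjust it to the Node.js release you prefer):

```shell
# Add the NodeSource repository and install Node.js + NPM
curl -sL https://deb.nodesource.com/setup_6.x | sudo -E bash -
sudo apt-get install -y nodejs

# Install PM2 globally
sudo npm install -g pm2
```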
Step 6: Clone the code to the server by running git clone git@github.com:nparsons08/thin-crust.git. Once cloned to your server, move into the thin-crust directory and run the command nano process.json to open the process.json file for editing. Once open, drop in your Slack token and MongoDB Atlas connection string. Then, you can start your application with the following command:
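With the token and connection string in place, PM2 can launch every script listed in process.json in one shot (this assumes a standard PM2 process file whose "apps" array names the three scripts):

```shell
pm2 start process.json
```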
Note: If you’d like to see what’s going on behind the scenes with pm2, you can run pm2 list to output all active processes. More information on PM2 can be found on their official website here: http://pm2.keymetrics.io/
# The Application in Action
Now that you’ve seen what’s under the hood, you know the codebase is rather straightforward, but what does it look like in production? On the user side, something along the lines of the following would appear in your Slack channel:
# Final Thoughts
Queues are a powerful mechanism for describing interoperating but independent processes. There are tens, if not hundreds, of commercially viable solutions; however, MongoDB serves as a capable message queue because of its flexible document storage capabilities, wide variety of supported languages, and tailable cursor “push” feature.
Marshalling and unmarshalling of arbitrarily complex JSON messages is handled automatically. Safe-writes are enabled for improved message durability and reliability, and tailable cursors are used to “push” data from MongoDB to Node.js.
While this application is simply a proof of concept at the moment, there are dozens of additions that could be added to better serve end-users. Here are a few ideas for improvements should you decide to contribute back to the project:
- Use a mobile application to gather actual coordinates (latitude and longitude) and store them in MongoDB for further enhancements to the application. More on geospatial operators can be found here: https://docs.mongodb.com/manual/applications/geospatial-indexes/
- Utilize Google Maps or Mapbox to display the location in Slack, providing a visual representation of the location.
- Provide an ETA for delivery using the location and a directions API such as https://www.mapbox.com/directions/.
- Let the user know if there has been a delay in their order process due to heavy traffic using one of the many traffic APIs currently available on the market. Currently, Waze has an amazing SDK for developers which can be found here: https://www.waze.com/sdk.
As you can imagine, the possibilities are endless. I’m curious to hear your thoughts on additions and improvements in the comments below. Cheers and happy coding!
Real Time Delivery Updates with Slack, MongoDB Atlas & Tailable Cursors was originally published in Hacker Noon on Medium.