Paris Trip Notes – Day 1

Today, I fly to France for a week of work.  What better way to capture memories of the trip than to create a blog post for each day?

Word(s) of the day:

French: Bonjour!
English: Hello
Notes: Greeting used for the daytime. When it gets later, you may use Bonsoir. Bon means good. Jour means day. Soir means evening.

French: Parlez-vous anglais?
English: Do you speak English?
Notes: Pretty important when you’re in a country whose language you don’t speak. I may also whip out Je ne parle pas français, which means I don’t speak French.

Franco-fact

France became a republic in 1792 as a result of the French Revolution against centuries of royal rule.  The Revolution started with the storming of the Bastille fortress on July 14th, 1789.  This event is celebrated every year all over France and is referred to as Bastille Day.

What is the Bastille, you ask?  (That’s what I asked.)

The Bastille was a political prison that was built in the late 1300’s to house criminals and enemies of the French state.

Approximately 1,000 revolutionaries stormed the Bastille that day, mostly craftsmen and store owners who lived in Paris.

The revolutionaries were members of a French social class called the Third Estate.   The First Estate was the clergy, the Second Estate was the nobility.

They stormed the Bastille in large part because of massive famine and extremely high bread prices… Hold up… Bread?  They rioted and overthrew the government because of bread?

Yep.  As it turns out, in the late 1700s, the average French citizen’s diet was made up primarily of bread and soup.  According to Smithsonian.com, prior to 1788 the average French wage earner spent half their income on bread.  Then, in 1788 and 1789, the grain crops failed and the price of bread shot up to over 88 percent of the average wage earner’s income.

Apps I used for this trip

I don’t speak French… I should probably say that right out of the gate. So – I thought it might be good to get an app to help me learn the basics. There are PLENTY. I really focused on the reviews. I tried several free apps, along with downloading several podcasts, but became frustrated with the quality and approach. Eventually I settled on two that I really feel are valuable.

SpeakEasy French

This app is from a company called PocketGlow. More information is available from http://pocketglow.com/sf.  

SpeakEasy French Navigation

There are two versions of this app… obviously, a free version and a paid version.  The free version did a great job for a small number of phrases and words.  I liked the interface, so I sprang for the paid version.  At $3.99, it’s more than I usually spend so quickly and without more investigation, but it hasn’t let me down.

The interface lets you navigate starting with categories of words, such as communication, emergency, borders/customs, etc.  I like this because the amount of time I have to memorize phrases is limited, so I’m more likely to need a reference in the moment.

DuoLingo

Duolingo feels a bit childish… but I have to admit, it works.  The repetition and the multifaceted nature of the learning methods are very effective.

Not sure how much I’m retaining – but it feels like it’s working… will keep you posted on progress.

Photo of the Day

Mike in the Airport
Waiting for the first leg of the flight. From PHL to JFK, JFK to CDG

Paris Trip Notes – Day minus 1

Sun 10-1 | Mon 10-2 | Tue 10-3 | Wed 10-4 | Thu 10-5 | Fri 10-6

Today is September 30th, 2017.  T-minus 16 hours and counting until I fly to Paris, France.  Not much planning or packing left to do.  I feel fortunate to be able to make the trip.  I’ll be visiting colleagues from MongoDB’s Paris office and will be lending a hand with some customer meetings for the week.

I thought it might be interesting to share details of the trip with my friends and family through this blog… so here’s the plan:

Each day of the trip (I’ll be in France for 5 days), I’ll create a blog post and an accompanying video.  Each post will have some vocabulary words I’m trying to learn, some facts I’m learning about France, a photo or two, and a bit about the plan for the day and what I’ve experienced.

Like what you see?  Let me know in the comments – or on social media.  Want to know something about Paris, France – or want me to take a photo… let me know!

Word(s) of the day:

French: Excusez-moi, où est ___?
English: Excuse me, where is ___?
Notes: I’ll definitely need to find my way around… I’m thinking this one will pay off quickly.

French: Où se trouvent les toilettes?
English: Where is the bathroom?
Notes: And what could be more important than finding a restroom?

Franco-fact:

France is the world’s most-visited country: some 83.7 million visitors arrived in France, according to a World Tourism Organization report published in 2014.

Photo of the Day

Ok – not a photo I took – but it’s a Google Street View of the MongoDB office in Paris.  See you in about 24 hours.

Sizing MongoDB: An exercise in knowing the unknowable

Into the Unknown: How many servers do I need?  How many CPUs, how much memory?  How fast should the storage be?  Do I need to shard?

As part of my job as a Solutions Architect, I’m asked to help provide guidance and recommendations for sizing infrastructure that will run MongoDB databases.  In nearly every case, I feel like Nostradamus.  That is to say, I feel like I’m being asked to predict the future.

In this article, I’ll talk about the process I use to get as close to comfortable with a prediction as possible – essentially, to know the unknowable.

Charting the Unknown

Let’s start out with some basic MongoDB sizing theory.  In order to adequately size a MongoDB deployment, you need to understand the following:

  1. Total Amount of Data that will be stored.
  2. Frequently accessed documents
  3. Read / Write Profile of the application
  4. Indexes that will be leveraged by the application to read data efficiently

These four key elements will help you determine what is known as the Working Set.  The Working Set is the frequently accessed data plus the indexes – the portion of your data that you will try to fit into RAM.

Wait a minute… how can it be unknowable?  How is it possible that I’m not able to know my performance requirements?

Ok – this may be an exaggeration, or at least a bit of hyperbole, but if you’ve ever completed an exercise in MongoDB sizing for a live production application, you’ll completely agree or at least understand.

The reason I chose the word “unknowable” is that it’s literally impossible to know every possible data point required to ensure that your server resources meet or exceed the requirements 100% of the time.  This is because most application environments are not closed.  They are changeable, and in many cases we are at the mercy of an unpredictable user population.

The best we can hope for is close.  The rest, we will leave up to the flexible, scalable architecture that MongoDB brings to the table.

When it comes right down to it, there are a lot of things we know… or at least can predict with pretty good accuracy when it comes to an application running in production.  Let’s start with the data.  Here’s where we employ good discovery technique.

To understand how MongoDB will perform, you must understand the following elements:

  • Data Volume – How much data will our application manage and store?
  • Application Read Profile – How will the application access this data?
  • Application Write Profile – How will the data be updated or written?

Data Volume

How much data will you be storing in MongoDB?  How many databases?  How many collections in each database?  How many documents in each collection?  What size will the average document be in each of these collections?

This requires a knowledge of your applications, of course.  What data will the applications be managing?  Let’s start with an example.  People and Cars are elements of data to which everyone can relate.  Let’s imagine we’re writing an application that helps us keep track of a group of people (our users) and their inventory of cars.

To start, let’s look at the projected number of users of our application: how many users’ car inventories will we be managing with our application and database?  Assume we’re going large scale and we expect to take on approximately 1 billion users.  Each user will own and manage approximately 2-3 automobiles and a few service records for each car.

The better we understand our app as data modelers, the better chance we have of deploying resources in support of the database that will match the application requirements.  Therefore, let’s dig a bit deeper into the data volumes and look at the documents.  What do the People documents and the Cars documents actually look like?  In its simplest form, our document model may look something like the following.

PeopleCars: Avg doc size: 1,024 bytes; # of docs: 1b

In this example, I’m expressing the relationship between people and their cars through embedding.  This leaves us with a single collection for People and their Cars.  In reality, you may require a more diverse mix – so let’s include a linked example collection for service records.  Imagine for our purposes that each person will have on average 10 service records per person.

Service: Avg doc size: 350 bytes; # of docs: 10b

Here’s what our architecture might look like visually:
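In document terms, a rough sketch of the two shapes might look like this (the field names are purely illustrative):

    // PeopleCars document – one per user, roughly 1KB, with cars embedded
    {
      "_id": 1,
      "name": "Jane Doe",
      "email": "jane@example.com",
      "cars": [
        { "make": "Honda", "model": "Civic", "year": 2014, "vin": "..." },
        { "make": "Ford", "model": "F-150", "year": 2010, "vin": "..." }
      ]
    }

    // Service document – one per service record, roughly 350 bytes, linked by vin
    {
      "_id": 101,
      "vin": "...",
      "date": "2017-06-12",
      "description": "Oil change and tire rotation",
      "cost": 79.95
    }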

Let’s do the math:  (1b users * 1,024 bytes) + (10b * 350 bytes) =

1,024,000,000,000 + 3,500,000,000,000 = 4,524,000,000,000 bytes = 4.524TB

Given the estimated users, their cars and their service records, we’ll be storing approximately 4.5TB of data in our database.   As stated previously, the goal in sizing MongoDB is to fit the working set into RAM.  So, to get an accurate assessment of the working set – we need to know how much of that 4.5TB will be accessed on a regular basis.

In many sizing exercises, we’re asked to estimate.  Let’s assume that at any given time during any given day, approximately 30-40% of our user population is actively logged in and reviewing their data.  That would mean we would have 35% * 1b users * 1024 bytes (user documents) plus 35% * 10b service docs * 350 bytes, or…

(.35 * 1b * 1024 bytes) = 358,400,000,000
plus
(.35 * 10b * 350 bytes) = 1,225,000,000,000
equals
1,583,400,000,000 bytes, or roughly 1.6TB

The last bit of information we need is the indexes we’ll maintain so that the application can swiftly and efficiently access the data.  Let’s assume that for each People document and each Service Record document we’ll maintain index entries totaling approximately 100 bytes per document… or 11b * 100 bytes = 1.1TB.

So our total working set will consist of 1.6TB of frequently accessed data and 1.1TB of index for a total working set size of 2.7TB.

Application Read / Write Profile

Your application is going to be writing, updating, reading and deleting your MongoDB data.  Each of these activities is going to consume resources from the servers on which MongoDB is running.  Therefore, to ensure that the performance of MongoDB is going to be acceptable, we should really understand the nature of these actions.

How many reads?  How frequent?

Understanding how many reads you’ll perform, what data you’ll be accessing, and how frequently you’ll be reading it is critical to ensure that your databases have enough memory to store these frequently accessed documents.

In many cases, knowing this will require estimation.  Here is where we’ll attempt to know the unknown.

In our example application, assume we’ll have an active user population of anywhere between 30% and 40% of the total number of users in our database – call it 35% of 1 billion users, or 350,000,000 users.  Let’s finish out the math.  With 350m active users, MongoDB will be regularly accessing 350m user documents, each approximately 1k in size.  Additionally, each user will likely be accessing their service records.  So, assume each of those 350m users – each having 3 cars with at least 5 service records per car – accesses the system, causing the application to fetch their People document and all of their Service Record documents (3 cars * 5 service records, at 350 bytes each).

350m users * 3 cars * 5 service records * 350 bytes = 1,837,500,000,000 bytes, or roughly 1.84TB

How many writes?

Just as important as understanding reads is understanding how many writes there will be, along with their size and frequency.  This will probably be the most important factor in determining the disk IOPS rating you will need to support your use case.

If we continue our imaginary example, you can probably guess that the application as I’ve described it will not generate a great deal of write workload.  People looking at their car inventories and reviewing their service records doesn’t exactly sound like a high-bandwidth, low-latency requirement.

However, it will be in our best interest to do the math to ensure our infrastructure can support our workload.

Let’s ask some questions.  Regardless of the actual details of your application, the questions are always the same.  What is the data?  How often will it change?  How does this change impact the total data stored?

In our example case the questions will be as follows:

  • How often will users be added?
    • 1m users per day
    • With 1m user additions, we’ll be looking at a daily incremental storage requirement of 1m * 1024bytes or 1GB.  This incremental value is likely negligible for most disk subsystems.
  • How often will service records be added?
    • 10m service updates per day
    • With 10m service updates, we’ll need to support a daily incremental storage requirement of 10m * 350bytes or 3.5GB.  Again – not monumental.

With both people and service records, we’re going to need to ensure that our infrastructure can support a write profile of at least 5GB per day.  The next logical question to ask is WHEN are these updates completed?  Based on what we know about our data and our application, the users will most likely come in at random periods – but let’s say we don’t want to make any assumptions and we want to understand what kind of load this will place on our disks.

We typically measure write performance in terms of IOPS (input/output operations per second).  To understand how much data we’ll be able to move at a given IOPS rating, consider the following:

IOPS * TransferSizeInBytes = BytesPerSec
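As a rough back-of-the-envelope check using the write volumes estimated above: 1m new user documents plus 10m service updates is about 11 million write operations per day.  Spread evenly over 86,400 seconds, that’s roughly 127 writes per second, and 4.5GB per day works out to roughly 52KB per second.  Real traffic won’t arrive evenly, so plan for peaks several times higher, but this gives a sense of scale.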

Let’s take a look at what modern disk subsystems can accomplish in terms of IOPs.

  • HDDs: Small reads – 175 IOPS; small writes – 280 IOPS
  • Flash SSDs: Small reads – 1,075 IOPS (6x); small writes – 21 IOPS (0.1x)
  • DRAM SSDs: Small reads – 4,091 IOPS (23x); small writes – 4,184 IOPS (14x)

For this exercise, we’ll assume the fastest disks available for random, small write workloads: DRAM SSDs.

To Shard or Not to Shard

In order to determine whether or not we will need to shard, or partition, our database, we need to figure out whether we’ll be able to provision a server with enough RAM to support our working set.

Do you have servers with more than 2.7TB of RAM?  Probably not.  Then let’s take a look at sharding.

What is sharding?
Sharding is the process of storing data records across multiple machines and is MongoDB’s approach to meeting the demands of data growth. As the size of the data increases, a single machine may not be sufficient to store the data or provide acceptable read and write throughput.

The most common goal of sharding is to store and manipulate a larger amount of data at a greater throughput than that which a single server can manage.  (You may also shard, or partition your data to accomplish data locality or residency using zone-based sharding… but we’re going to leave that for another article.)

To determine the total number of partitions we’ll roughly divide the total required data size from our working set by the amount of memory available in each server we’ll use for a partition.

If you’re fortunate enough to be ordering server hardware prior to deployment of your application, order servers with as much RAM as you can afford.  This will limit the number of shards and enable you to scale in the future should it be required.

For the sake of this exercise, let’s assume our standard server profile is equipped with 256GB of RAM.  In order to safely fit our working set into memory, we would want to partition the data in such a way that we created (2.7TB/256GB) or 11 partitions (rounded up, of course.)

In future articles, we’ll discuss in further detail the process of determining exactly how to partition or shard your data.

Conclusion

In summary, we’ve answered the question, how do we go about sizing for a MongoDB deployment – or – how do I go about coming to know the unknowable?  We looked at the data, and the access patterns of that data.  We worked through an example and found that there are really no shortcuts – we must understand the data and how it will be manipulated and managed.

Lastly, we came to a conclusion – an educated guess about the number of servers and the amount of RAM that will be required for each.  I want to stress that sizing MongoDB is part art and part science.  You can rarely, if ever, get all of the facts, so to bridge the gap of uncertainty we use educated guesses and then we test… we search for empirical data to support our hypotheses and we test again.  You will do yourself a great disservice if you neglect this step.  You must test your sizing predictions and adjust where you see deviations in the patterns associated with your application – or your test harness.

If you have a challenge or a project in front of you where you need to deploy server resource for a new MongoDB deployment, let me know.  Reach out via the contact page, or hit me up on LinkedIn and let me know how you’re making out with your project.

 

Moving from Tables to Documents with MongoDB

I’m going to ask you to set aside your concept of “proper data modeling” and “3rd normal form.”  Going forward in this article, those concepts will hold you back.  Some DBAs and data modelers become angry at this suggestion.  If that’s you… welcome – but please hold judgment until you’ve read this complete post.

Data normalization focuses on organization of the data for the purpose of eliminating duplication.  It’s a data-focused approach.  Not an application-focused approach.

With document data modeling, we flip the script and we turn to our application to answer the questions about how we organize the data.  Before we get into exactly how to do this, let’s delve a bit into the possible structures of a document.

MongoDB is document-based.  That is to say, it stores data in documents – JSON-like documents, to be more specific.  Most people might think of Microsoft Word documents or PDF documents when I mention the word document, and while it is true that MongoDB can store these types of documents, what I’m really talking about is JSON documents.

JSON

JavaScript Object Notation (JSON)  is a lightweight data-interchange format. It is easy for humans to read and write. It is easy for machines to parse and generate.  It is based on a subset of the JavaScript Programming Language.

When you store data in JSON documents, you approach data modeling quite differently than with relational technologies.  You see, relational technologies were developed in the 1970s and 1980s, when disk space was extremely expensive.  Thousands and even tens of thousands of dollars per gigabyte of disk was not unusual early on.  So, to preserve this most valuable resource, relational technologies developed the concept of data normalization, with a set of rules.

Normalization

Normalization is the systematic method of decomposing data into tables to eliminate redundancy and undesirable characteristics like insertion, update and deletion anomalies.  It is a multi-step process that organizes data into tables of rows and columns, removing duplicated data along the way.

Normalization is used for mainly two purposes:

  • Eliminate redundant data.
  • Ensure data dependencies make sense, i.e., data is logically stored in line with the core objectives stated above.

Normalization techniques and the rules associated with them are all well and good if you intend to leverage a relational database technology.  However, as discussed, MongoDB is document-based… i.e., non-relational.

That is not to say that you cannot define and maintain relationships between data elements in your document model.  However, this is not a primary constraint when building a document-based data model.

Rich Data Structures

JSON documents are relatively simple structures.  They begin with a curly brace and end with a curly brace.  In between these braces, you have a set of key-value pairs delimited by commas.  Here’s an example:
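A trivial, made-up document will do:

    {
      "name": "Jane Doe",
      "city": "Philadelphia",
      "age": 42
    }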

In my example, I’ve tidied things up using indents (spaces before the keys), but this is not necessary.  The above example is extremely simple.  These structures can get quite complex and rich.  The above example includes keys and values.  The keys in JSON are always strings.  The values, however, can be strings, numbers, decimals, dates, arrays, objects, arrays of embedded objects, and so forth.  Let’s look at a more complex example:
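Here’s a sketch of a richer document – the fields are made up, and the ISODate value is MongoDB shell notation rather than plain JSON (a distinction we’ll get to in a moment):

    {
      "name": "Jane Doe",
      "memberSince": ISODate("2017-01-15T00:00:00Z"),
      "address": {
        "street": "123 Main St",
        "city": "Philadelphia",
        "zip": "19103"
      },
      "interests": ["travel", "photography"],
      "cars": [
        { "make": "Honda", "model": "Civic", "year": 2014 },
        { "make": "Ford", "model": "F-150", "year": 2010 }
      ]
    }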

As you can see, the values don’t have to be simple strings or numbers.  They can be quite complex.  Now, if you’re aware of JSON, you might be saying something like – wait a minute, JSON doesn’t have types like dates or decimals… and you’d be correct.

At the beginning of this article, I stated specifically that MongoDB stores data in JSON-like documents.  We actually store the data in BSON documents.  BSON is a binary representation of the JSON document.  You can read all about this standard at bsonspec.org.

We use BSON so that we can honor the types not supported by JSON… to make it easier for developers to store rich data types and not have to marshal them back from non-native forms.  When you write a decimal in MongoDB and then read it back, it comes to you via the driver in decimal form.

Now that we understand a bit about how MongoDB stores and organizes data in document structures, let’s address migrating data from a relational structure to a document-based data model with MongoDB.

Let’s use the obligatory Books and Authors example, not because it’s stunningly brilliant, no – because I’m lazy and it happens to be something to which we can all relate.

Consider the following ERD.
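In text form, the two tables might be defined something like this (the column names are just illustrative):

    Authors: author_id (PK), first_name, last_name
    Books:   book_id (PK), title, isbn, author_id (FK -> Authors.author_id)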

In this simple example, we have two tables.  Authors, and Books.  There is a relationship expressed between these two tables in that Books have an Author.  Rather than storing this data together, we’re storing it separately and expressing the relationship through LINKING.

With MongoDB, we can store this very same information, but instead of linking between two separate locations, we can EMBED the same data.  Consider the following:
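A sketch of what that embedded Author document might look like (same illustrative column names as above):

    {
      "_id": 1,
      "first_name": "Ernest",
      "last_name": "Hemingway",
      "books": [
        { "title": "The Old Man and the Sea", "year": 1952 },
        { "title": "A Farewell to Arms", "year": 1929 }
      ]
    }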

In this example, we’ve designed a document structure by creating an actual document.  Notice in the previous, relational example, we created an ERD, or Entity Relationship diagram.  This same ERD may be useful for us as we model our data in documents… but the difference is that with MongoDB, there is no separate, distinct schema.  The schema does not live separately from the actual documents.

In atomic and molecular physics, there’s a concept known as the observer effect.  It applies here to the concept of a schema in MongoDB.  If you don’t look at the data, the schema does not exist.  It’s not until you observe the data that you see a schema defining what keys and values you have.

Now, you may begin to wonder something along the lines of what if a data element in the subordinate changes?  What if a subdocument element such as book title changes?  Unlikely, I suppose but possible.  And since we’re storing book titles inside of an Author Record, and possibly even storing the very same information such as book title, description, etc. in another collection specific to these data elements, how will we address this change?  ARE YOU SAYING WE MAY HAVE TO UPDATE THE DATA MORE THAN ONCE!?!

Yes.

Calm down.  We’re not under the same constraints as relational developers.  We own the destiny of our document structures as well as the content.

We can change data multiple times, in multiple locations.

But… but, but, that’s wrong.  I feel your terror.  It’s not wrong because we don’t adhere to data normalization rules.  Who cares?  Who cares if we store data in multiple locations – we are the czars of our data and we control when it’s written with our code.  We are no longer beholden to a schema-wielding DBA.  We are that DBA.  If this feels wrong, you’re not alone.  But trust me, the benefits of this approach far outweigh the drawbacks.

Benefits of the Document Model Approach

Benefit One: Data Locality – Data that’s accessed together is stored together

When we toss out normalization we gain the notion of “Data that’s accessed together is stored together”, otherwise known as data locality.  An Author document contains all relevant details about an author including the books he or she has written.  When my application needs this data, it issues a read and in most cases, a single read fetches ALL of the data needed.  In relational, or normalized data, a single read gets me perhaps a single row in a single table and then I need to issue another read to get the related data from an additional table.  Multiple reads equals multiple disk seeks equals slower performance for my application.

Benefit Two: Readability

When all the data that’s accessed together is stored together, it’s just logical – it makes sense – you can see it all at once in a document.  Whereas with relational technologies, you must issue SQL commands with JOIN clauses to pull data from multiple locations.  Much less readable.

Benefit Three: Flexibility and Agility

When we store data in documents, adding, removing or modifying the data structures is much easier.  There literally is no governing schema.  We simply modify the code we use to update the data in the database.  We have no external schema to modify.  Therefore, we gain the flexibility to make these changes without stopping the database… without issuing an “alter table” command.
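For example, in the mongo shell, adding a brand-new field to one author’s document is just an update (the filter and field here are hypothetical):

    // No ALTER TABLE, no migration – the new field simply appears on this document.
    db.authors.updateOne(
      { last_name: "Hemingway" },
      { $set: { website: "https://example.com/hemingway" } }
    )

Documents that never receive the field simply won’t have it, and the application treats it as optional.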

Conclusion

In this first of a series of articles on migrating from relational to documents, we’ve looked at how data is stored in MongoDB, what documents are, the structure of JSON and BSON and explored just a few of the benefits.  While the examples are basic, I hope these have illustrated the power and flexibility of this modern approach to data storage.

In my next article, I’ll tackle a bit more challenging relational schema and convert that to documents and incorporate the code used to maintain the data.

If you’re interested in this topic but need a more structured approach to enablement and learning, MongoDB has amazing resources to help you wrap your mind around the document model.  I highly recommend MongoDB University  if you’re new – or trying to improve your knowledge of MongoDB.

Please leave a comment, ask a question or reach out to me on LinkedIn with feedback.

 

 

Deploying MongoDB Enterprise with Ansible

I’ve been asked about this subject several times, so I thought it might be best to put some thoughts into a blog post and share it.

For the purposes of this article, I’m going to assume you have Ansible installed.  If you need help with that specifically, refer to the ansible.com site for documentation on installation.

Question: Why does your image refer to Ops Manager?  I thought we were going to cover Ansible.

Answer: Ops Manager can accomplish many things over and above what Ansible covers: monitoring, automating, optimizing and backing up your MongoDB installation.  I won’t cover Ops Manager in this article, but if you’re running MongoDB in production, I highly recommend looking into it.

Ansible is an incredible tool.  It’s also been referred to as “SSH Configuration Management on Steroids.”  Explaining what Ansible is and how it works is beyond the scope of this article.  I will however, provide some basic details specific to the application of Ansible around the problem of deploying MongoDB.

Ansible leverages SSH to enable you to manage, and automate the process of configuring and maintaining configuration on a number of servers.

Inventory

The first thing to know about Ansible is that it requires knowledge of the servers you’ll be managing using the tool.  This knowledge is maintained using an inventory file.

Ansible works against multiple systems in your infrastructure at the same time. It does this by selecting portions of systems listed in Ansible’s inventory file, which defaults to being saved in the location /etc/ansible/hosts. You can specify a different inventory file using the -i <path> option on the command line.

Not only is this inventory configurable, but you can also use multiple inventory files at the same time and pull inventory from dynamic or cloud sources, as described in Ansible’s Dynamic Inventory documentation.  This is a mind-blowing concept for some.  A dynamic inventory is one that can change… it’s not a static file, it’s a script that returns a list of servers.

For the time being, let’s leave the dynamic capability to the side… and let’s create a static file containing the names of the servers on which we’ll install MongoDB.
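A minimal static inventory along those lines might look like this:

    [mongodb]
    foo.example.com
    bar.example.com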

Where you see “[mongodb]” – this is a group indicator.  It tells Ansible that the lines that follow are servers that should be part of the group indicated… in this case, “mongodb”.  The string mongodb is arbitrary and could be anything… “MyServers” would work just as well.  Later, when we write some Ansible commands, we’ll refer to these servers as a group – and the group name will be important.

Where you see foo.example.com and bar.example.com, these are the fully qualified domain names of the servers on which you’ll be installing MongoDB.  If you’re installing a replica set, you’ll most likely have three.

As I’m writing this article, I have 3 servers deployed in AWS/EC2 that I’ll be using.  So – here’s what my inventory file looks like:
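Roughly the following, with placeholder names standing in for the actual EC2 public DNS entries:

    [ReplicaSet]
    ec2-host-1.compute.amazonaws.com
    ec2-host-2.compute.amazonaws.com
    ec2-host-3.compute.amazonaws.com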

Server Access and Connectivity

Ok, so now we’ve defined our universe of servers, let’s talk about how we’re going to leverage Ansible to effect change on these servers.  Ansible uses SSH to connect and manage servers.  In order for this to happen, you need to give Ansible the appropriate credentials.

Ansible will assume you have SSH access available to your servers, usually based on SSH-Key.  Because Ansible uses SSH, the server it’s on needs to be able to SSH into the inventory servers. It will attempt to connect as the current user it is being run as.

Getting Busy

Ok, now that we understand the servers on which we’ll install MongoDB, as well as the mechanism with which we’ll access and effect change on those servers, let’s get busy and do something.

In its most basic form, Ansible can be used from the command line to do things with your server inventory.  The most basic module you can try right now is ping.  Despite the name, Ansible’s ping module doesn’t send an ICMP packet; it connects to each server over SSH and verifies that Ansible can reach it and run a module there.
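Assuming the inventory above is saved as ./inventory and the group is named ReplicaSet, the invocation looks something like this:

    ansible -i ./inventory ReplicaSet -m ping

Each host should come back with a SUCCESS response containing "ping": "pong".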

In this simple example, I’m using the ansible command, specifying the location of my inventory file and calling an Ansible module called ping against a group called “ReplicaSet.”  Ansible responds with the output and results of the command I ran.

Automation

Ansible’s nature is to automate things.  So, naturally, you can automate the process of telling Ansible things about your configuration or your environment.  The inventory file, for example, can be set in your environment so you don’t have to use the -i switch each and every time you run a command.
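One way to do that (the path here is a placeholder) is to set Ansible’s ANSIBLE_INVENTORY environment variable:

    export ANSIBLE_INVENTORY=~/inventory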

In this way, I’ve now set my inventory file in my environment and I no longer have to use the -i switch.  So I can simply type the following to achieve the same output as previously.
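With the environment variable in place, that’s simply:

    ansible ReplicaSet -m ping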

Additionally, where I’m leveraging the -m switch to specify the module I want to use, I can instead use the ansible-playbook command and move the actual work I want accomplished to a separate file called a playbook.  Ansible playbooks are like scripts that describe the work you want Ansible to accomplish.

Playbooks leverage YAML (these days the acronym stands for “YAML Ain’t Markup Language”).  This is a straightforward, easy-to-read configuration language.  You’ll get the hang of it quickly.  Here’s an example of the previous ping command represented in playbook (YAML) format:
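A minimal version might look like this:

    ---
    - hosts: ReplicaSet
      tasks:
        - name: verify connectivity to the replica set hosts
          ping: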

And here’s what that looks like when we execute it:

If you’re playing along at home, place the YAML code for the ping command into a file called ping.yml.  Then execute the command ansible-playbook ping.yml.

So ping is awesome but what about mongodb?

Yea – we’re getting there.  I get it, you’re impatient… so am I.  Alright – so where are we?  We know our inventory of mongodb servers… we understand how we’re going to access them via SSH and we just learned about the fact that we can create these awesome script-like things called playbooks.

In the ping example playbook, I used a section called tasks.  Tasks are where we leverage commands that ansible understands to carry out the things we want to accomplish on our inventory of servers… ok – so how then do we install MongoDB?

Ansible does for you what you are not able, or don’t want, to do for yourself – but it does it in much the same manner you would by hand.  With MongoDB, specifically in the Linux world, the easiest way to install it is by configuring a repo (an RPM repository) and leveraging the yum command to install the packages for you.

Yum satisfies all dependencies during the installation process, so it makes managing your software installations a lot easier.  We could, technically, use Ansible to download the binaries and perform a manual install… sure.  But let’s leverage the power of package management to do all that for us.

In order to use YUM, we first need to define a repository.  The following is the repo definition for MongoDB Enterprise.
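The file follows the format published in the MongoDB documentation; the release in the URLs below (3.6) is an assumption on my part – substitute whatever version you’re targeting:

    [mongodb-enterprise]
    name=MongoDB Enterprise Repository
    baseurl=https://repo.mongodb.com/yum/redhat/$releasever/mongodb-enterprise/3.6/$basearch/
    gpgcheck=1
    enabled=1
    gpgkey=https://www.mongodb.org/static/pgp/server-3.6.asc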

To use this repo without Ansible, you’d copy the repo file to each of your servers, then execute yum update and yum install mongodb-enterprise, etc.

We’re not going to do that – we’re going to let Ansible do it for us.  So, step 1 will be to create a file called mongodb-enterprise.repo and copy in the contents from above.  Place this file in a directory called files (for neatness, of course).

Next, let’s create the playbook we’ll use that refers to this repo.  Here’s roughly what mine looks like:
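In sketch form – the group name, remote user and exact package names are my assumptions, so adjust them for your environment – each line below maps onto the walkthrough that follows:

    ---
    - hosts: ReplicaSet
      remote_user: ec2-user
      become: true
      tasks:
        - copy: src=../files/mongodb-enterprise.repo dest=/etc/yum.repos.d/mongodb-enterprise.repo
        - yum: name=* state=latest
        - yum: name=mongodb-enterprise state=latest
        - yum: name=mongodb-enterprise-shell state=latest
        - yum: name=gnupg2 state=present
        - file: path=/data state=directory owner=mongod group=mongod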

Let’s break this down.

Line 001: Starter YML
Line 002: Hosts designation – lets Ansible know what hosts we’re acting on.
Line 003: Remote user designation – who are we impersonating when we execute these changes on the remote host.
Line 004: Become – this is the same as sudo, essentially.  We need to make these changes as a super user.
Line 005: Tasks designator – begin block.
Line 006: What file are we using?  We are going to send this file to our remote hosts, one at a time.
Line 007: Once there, what commands will we be executing?  First, let’s do a generic update.  The equivalent of this is “yum update”.
Line 008: Next, we want to target a specific package whose state we want to change… specifically, we want the state of the package named mongodb-enterprise to be installed at the latest version.
Line 009: Now we want to install the mongodb shell commands.
Line 010: We also need gpg installed.
Line 011: Lastly, we’re going to run MongoDB in a specific directory – namely “/data” – so we make sure that directory exists.

Let’s give it a shot.  First, place these commands in a file called playbook-replicaset-enterprise-prerequisites.yml in a directory called playbooks.

Here’s what this looks like when it runs:

If all goes according to plan, you should end up with MongoDB Enterprise installed on your hosts.

Let’s go interrogate one of the servers to make sure.
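One quick way, without even logging in, is to ask each host for the installed version via Ansible’s command module:

    ansible ReplicaSet -m command -a "mongod --version"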

Sure enough, MongoDB is installed and ready to go.

So – now that we’ve installed it using a package manager, what about starting, stopping, restarting, etc.?  Ansible can easily accomplish these from command line – but let’s create playbooks so we have them in our arsenal.

Let’s create a new playbook called playbook-replicaset-start.yml and fill it with the following content.  Notice we’re calling mongod directly, and not relying on the service commands… you will want to revisit this should you take your deployment into a production environment.
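Here’s a sketch (the replica set name rs0 and the log path are placeholders):

    ---
    - hosts: ReplicaSet
      remote_user: ec2-user
      become: true
      tasks:
        - name: start mongod against the /data directory
          command: mongod --dbpath /data --logpath /data/mongod.log --replSet rs0 --fork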

Here’s what our new mongodb start playbook looks like in action:

And now let’s verify that we’ve actually effected the expected change on our servers:
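For instance, by checking for a running mongod process on each host:

    ansible ReplicaSet -m shell -a "pgrep -l mongod"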

So that’s it, folks – we’ve gone from zero to hero with Ansible and MongoDB.  To recap: we learned about Ansible’s inventory, we learned how to run Ansible from the command line as well as how to create scripts called playbooks, and we created a repo file, distributed it to our servers and leveraged it via Ansible to install MongoDB.

If this topic interests you and you’re looking to go to the next step, check out my repository on GitHub that contains some great playbook content – some of which I wrote and a lot of which my colleague Torsten Spindler wrote.  That repo enables you to automate the process of installing MongoDB and leverages Ops Manager – registering newly installed hosts with the Ops Manager console.  This will better prepare you to manage your production implementation of MongoDB.

This is just a beginning, but I hope you can see the incredible power and flexibility that Ansible can bring you.  Feel free to leave a comment or question should you have either.

You may also want to review the scripts I’ve created as a part of this post.  They can be found and freely downloaded at: https://github.com/mrlynn/deploying-mongodb-ansible.