Welcome everyone to codebeast

My name is codebeast, and today I'll be taking you through this entire session on this DevOps full course

So we have designed this crash course in such a way that it starts from the basic topics and also covers the advanced ones

So we'll be covering all the stages and tools involved in DevOps

So this is how the modules are structured

We'll start by understanding: what is the meaning of DevOps? What was the methodology before DevOps? All those questions will be answered in the first module

Then we are going to talk about what Git is, how it works, what Version Control means, and how we can achieve it with the help of Git. That session will be taken by codebeast

Post that, I'll be teaching you how you can create really cool delivery pipelines with the help of Jenkins, Maven, Git and GitHub

After that, I'll be talking about the most famous software containerization platform, which is Docker. Post that, Vardhan will be teaching you how you can use Kubernetes for orchestrating Docker container clusters

After that, we are going to talk about configuration management using Ansible and Puppet

Now, both of these tools are really famous in the market. Ansible is pretty trending, whereas Puppet is very mature; it has been in the market since 2005

Finally, I'll be teaching you how you can perform continuous monitoring with the help of Nagios

So let's start the session guys

We'll begin by understanding what DevOps is. So this is what we'll be discussing today

We'll begin by understanding why we need DevOps; everything exists for a reason

So we'll try to figure out that reason. We are going to see the various limitations of the traditional software delivery methodologies and how DevOps overcomes all of those limitations

Then we are going to focus on what exactly the DevOps methodology is and the various stages and tools involved in DevOps

And then finally, in the hands-on part, I will show you how you can create a Docker image, how you can build it, test it, and even push it onto Docker Hub in an automated fashion using Jenkins

So I hope you all are clear with the agenda

So let's move forward guys and we'll see why we need DevOps

So guys, let's start with the waterfall model

Now, before DevOps, organizations were using this particular software development methodology

It was first documented in 1970 by Royce and was the first publicly documented life cycle model

The waterfall model describes a development method that is linear and sequential. Waterfall development has distinct goals for each phase of development

Now, you must be thinking: why the name waterfall model? Because it's pretty similar to a waterfall

Now, what happens in a waterfall? Once the water has flowed over the edge of the cliff, it cannot turn back. The same is the case for the waterfall development strategy as well

An application will go to the next stage only when the previous stage is complete

So let us focus on the various stages involved in the waterfall methodology

So notice the diagram that is there in front of your screen

If you notice, it's almost like a waterfall, or you can even visualize it as a ladder as well

So first, what happens? The client gives requirements for an application

So you gather those requirements and you try to analyze them. Then what happens? You design the application: how the application is going to look

Then you start writing the code for the application and you build it. When I say build, it involves multiple things: compiling your application, unit testing, and even packaging as well. After that, it is deployed onto the test servers for testing and then deployed onto the prod servers for release

And once the application is live, it is monitored


I know this model looks perfect, and trust me guys, it was at that time. But think about it: what will happen if we use it now?

Fine, let me give you a few disadvantages of this model

So here are a few disadvantages

So the first one is: once the application is in the testing stage, it is very difficult to go back and change something that was not well thought out in the concept stage. Now, what do I mean by that? Suppose you have written the code for the entire application, but in testing there is some bug in that application. Now, in order to remove that bug, you need to go through the entire source code of the application, which used to take a lot of time, right? So that is a very big limitation of the waterfall model.

Apart from that, no working software is produced until late in the life cycle

We saw that when we were discussing the various stages of the waterfall model. There is also a high amount of risk and uncertainty, which means that once your product is live in the market, if there is any bug or any downtime, you have to go through the entire source code of the application again; you have to go through that entire waterfall process that we just saw in order to produce a working software again, right? So that's how it used to take a lot of time

There's a lot of risk and uncertainty. Imagine if you have upgraded some software stack in your production environment and that led to the failure of your application; going back to the previous stable version also used to take a lot of time. It is not a good model for complex and object-oriented projects, and it is not suitable for projects where requirements are at a moderate to high risk of changing

So what do I mean by that? Suppose your client has given you a requirement for a web application today. Now you have taken your own sweet time and you are in a position to release the application, say, after one year. Now, after one year, the market has changed

The client does not want a web application

He's looking for a mobile application now. So this type of model is not suitable where requirements are at a moderate to high risk of changing

So there's a question that popped up on my screen; it is from codebeast. She's asking: so do all the iterations in the waterfall model go through all the stages?

Well, there are no iterations as such, codebeast

First of all, it is not the agile methodology or DevOps

It is the waterfall model, right? There are no iterations; only once a stage is complete will it go to the next stage

So there are no iterations as such. If you're talking about the application once it is live, and then there is some bug or some downtime, then at that time it depends on the kind of bug that is there in the application.

Suppose there is a bug because of some flawed version of a software stack installed in your production environment

Probably some upgraded version because of which your application is not working properly

You need to roll back to the previous stable version of the software stack in your production environment

So that can be one kind of bug. Apart from that, there might be bugs related to the code, in which case you have to check the entire source code of the application again

Now, if you look at it, rolling back and incorporating the feedback that you have got used to take a lot of time

Right? So I hope this answers your question

All right, she's fine with the answer. If you have any other questions or doubts, guys, you can just go ahead and ask me. Fine, so there are no questions right now

So I hope you have understood what the waterfall model was and what the various limitations of this waterfall model are

Now we are going to focus on the next methodology, that is called the agile methodology

Now, the agile methodology is a practice that promotes continuous iteration of development and testing throughout the software development life cycle of the project

So the development and the testing of an application happen continuously with the agile methodology

So what do I mean by that? If you focus on the diagram that is there in front of your screen: here we get the feedback from the testing that we have done in the previous iteration

We design the application again, then we develop it. Again we test it, then we discover a few things that we can incorporate in the application. We again design it and develop it, and there are multiple iterations involved in the development and testing of a particular application in agile


Each project is broken up into several iterations, and all iterations should be of the same time duration, generally between 2 to 8 weeks. At the end of each iteration, a working product should be delivered

So this is what the agile methodology is in a nutshell. Now let me go ahead and compare this with the waterfall model

Now, if you notice the diagram that is there in front of your screen, the waterfall model is pretty linear and pretty straight. As you can see from the diagram, we analyze the requirements, we plan it, we design it, build it, test it

And then finally we deploy it onto the prod servers for release. But when I talk about the agile methodology, over here the design, build and testing part is happening continuously

We are writing the code

We are building the application

We are testing it continuously, and there are several iterations involved in this particular stage

And once the final testing is done, it is then deployed onto the prod servers for release, right? So the agile methodology basically breaks down the entire software delivery life cycle into small sprints, or iterations as we call them, due to which the development and the testing parts of the software delivery life cycle happen continuously

Let's move forward, and we are going to focus on the various limitations of the agile methodology. The first and biggest limitation of the agile methodology is that only the dev part of the team was pretty agile, right? The development and testing used to happen continuously

But when I talk about deployment, that was not continuous. There were still a lot of conflicts happening between the dev and the ops sides of the company; the dev team wants agility

Whereas the ops team wants stability. And there's a very common conflict that happens, and a lot of you can actually relate to it: the code works fine on the developer's laptop, but when it reaches production, there is some bug in the application or it does not work in production at all

So this is because of some inconsistency in the computing environments, due to which the operations team and the dev team used to fight a lot

There were a lot of conflicts happening at that time, guys

So the agile methodology made the dev part of the company pretty agile, but when I talk about the ops side of the company, they needed some solution in order to solve the problem that I've just discussed, right? So I hope you are able to understand what kind of problem I'm focusing on

If you go back to the previous diagram as well, over here, if you notice, only the design, build and test part, or you can say the development, building and testing part, is continuous, right? The deployment is still linear. You need to deploy it manually onto the various prod servers

That's what was happening in the agile methodology

Right? So the error that I was talking about, due to which your application is not working fine: I mean, once your application is live, due to some software stack in the production environment it doesn't work properly. Now, going back and changing something in the production environment used to take a lot of time

For example, you know, you have upgraded some particular software stack and because of that your application is not working; it fails to work. Now, to go back to the previous stable version of the software stack, the operations team used to take a lot of time, because they had to go through the long scripts that they had written in order to provision the infrastructure

So let me just give you a quick recap of the things that we have discussed till now; we have discussed quite a lot of history

We started with the traditional waterfall model; we understood its various stages and the limitations of this waterfall model. Then we went ahead and understood what exactly the agile methodology is, how it is different from the waterfall model, and what the various limitations of the agile methodology are. So this is what we have discussed till now. Now we are going to look at the solution to all the problems that we have just discussed, and the solution is none other than DevOps. DevOps is basically a software development strategy which bridges the gap between the dev side and the ops side of the company

So DevOps is basically a term for a group of concepts that, while not all new, have catalyzed into a movement and are rapidly spreading through the technical community. Like any new and popular term, people may have confused and sometimes contradictory impressions of what it is

So let me tell you guys, DevOps is not a technology

It is a methodology

So basically, DevOps is a practice that can be equated to the study of building, evolving and operating rapidly changing systems at scale


Let me put this in simpler terms

So DevOps is the practice of operations and development engineers participating together in the entire software life cycle, from design through the development process to production support. And you can also say that DevOps is characterized by operations staff making use of many of the same techniques as developers for their systems work

I'll explain how this definition is relevant, because all we are saying here is that DevOps is characterized by operations staff making use of many of the same techniques as developers for their systems work. When I explain infrastructure as code, you will understand why I am using this particular definition

So as you know, DevOps is a software development strategy which bridges the gap between the dev part and the ops part of the company and helps us to deliver good quality software in time. And how does this happen? This happens because of the various stages and tools involved in DevOps

So here is a diagram which is nothing but an infinite loop, because everything happens continuously in DevOps, guys. Everything, starting from coding, testing, deployment and monitoring, is happening continuously. And these are the various tools which are involved in the DevOps methodology, right? So not only is the knowledge of these tools important for a DevOps engineer, but also how to use these tools

How can I architect my software delivery life cycle such that I get the maximum output, right? So it doesn't mean that if I have a good knowledge of Jenkins or Git or Docker, then I become a DevOps engineer

No that is not true

You should know how to use them

You should know where to use them to get the maximum output

So I hope you have got my point, what I'm trying to say here. In the next slide, we'll be discussing the various stages that are involved in DevOps. Fine, so let's move forward guys, and we are going to focus on the various stages involved in DevOps

So these are the various stages involved in DevOps

Let me just take you through all these stages one by one starting from Version Control

So I'll be discussing all of these stages one by one as well

But let me just give you an entire picture of these stages in one slide first

So Version Control is basically maintaining different versions of the code. What do I mean by that? Suppose there are multiple developers writing code for a particular application

So how will I know which developer has made which commit at what time, which commit is actually causing the error, and how will I revert back to the previous commit? I hope you are getting my point. My point here is: how will I manage that source code? Suppose developer A has made a commit and that commit is causing some error

Now, how will I know that developer A has made that commit, at what time he made that commit, and where in the code that editing happened, right? So all of these questions can be answered once you use Version Control tools like Git and Subversion

Out of these, we are going to focus on Git in our course

So then we have continuous integration

So continuous integration is basically building your application continuously. What do I mean by that? Suppose any developer makes a change in the source code; a continuous integration server should be able to pull that code and prepare a build. Now, when I say build, people have this misconception that it means only compiling the source code

It is not true, guys. It includes everything starting from compiling your source code, validating your source code, code review, unit testing, integration testing, etc., and even packaging your application as well

Then comes continuous delivery

Now, take the same continuous integration tool that we are using, suppose Jenkins. What Jenkins will do: once the application is built, it will be deployed onto the test servers for testing, to perform user acceptance tests or end-user testing, whatever you call it. There we'll be using tools like Selenium for performing automation testing

And once that is done, it will then be deployed onto the prod servers for release, right? That is called continuous deployment, and here we'll be using configuration management tools. This is basically to provision your infrastructure, to provision your prod environment. And let me tell you guys, continuous deployment is something which is not a good practice, because before releasing a product in the market, there might be multiple checks that you want to do before that, right? There might be multiple other tests that you want to do.

So you don't want this to be automated, right? That's why continuous deployment is something which is not preferred. After continuous delivery, we can go ahead and manually use configuration management tools like Puppet, Chef, Ansible and SaltStack, or we can even use Docker for a similar purpose, and then we can go ahead and deploy it onto the prod servers for release

And once the application is live, it is continuously monitored by tools like Nagios or Splunk, which will provide the relevant feedback to the concerned teams, right? So these are the various stages involved in DevOps. So now let me just go back and check if there are any doubts

So this is how our various stages are scheduled as various jobs

So we have Jenkins here

We have a continuous integration server

So what Jenkins will do: the moment any developer makes a change in the source code, it will take that code and then it will trigger a build using tools like Maven, Ant or Gradle

Once that is done

It will deploy it onto the test servers for testing, for end-user testing, using tools like Selenium, JUnit, etc

Then what happens? It will automatically take that tested application and deploy it onto the prod servers for release, right? And then it is continuously monitored by tools like Nagios, Splunk, ELK, et cetera

So Jenkins is basically the heart of the DevOps life cycle

It gives you a nice 360-degree view of your entire software delivery life cycle

So with that UI, you can go ahead and have a look at how your application is doing currently, right? Which stage it is in right now, whether testing is done or not. All those things you can go ahead and see in the Jenkins dashboard, right? There might be multiple jobs running in the Jenkins dashboard that you can see, and it gives you a very good picture of the entire software delivery life cycle

Uh, don't worry, I'm going to discuss all of these stages in detail as we move forward. We are going to discuss each of these stages one by one, starting from source code management, or you can even call it Version Control

Now, what happens in source code management? There are two types of source code management approaches: one is called centralized Version Control, and the other one is called distributed Version Control. Now, imagine there are multiple developers writing code for an application. If some bug is introduced, how will we know which commit has caused that error, and how will I revert back to the previous version of the code? In order to solve these issues, source code management tools were introduced, and there are two types of them: centralized Version Control and distributed Version Control

So let's discuss centralized Version Control first

So a centralized version control system uses a central server to store all the files and enables team collaboration

It works with a single repository, which users access directly through a central server

So this is what happenshere guys

So every developer has a working copy, the working directory

So the moment they want to make any change in the source code, they can go ahead and make a commit in the shared repository, right? And they can even update their working copy by, you know, pulling the code that is there in the repository as well

So the repository in the diagram that you are noticing indicates a central server, which could be local or remote, and which is directly connected to each of the programmers' workstations

As you can see, every programmer can extract or update their workstation with the data present in the repository, or can even make changes to the data and commit them to the repository

Every operation is performed directly on the central server or the central repository. Even though it seems pretty convenient to maintain a single repository, it has a lot of drawbacks

But before I tell you the drawbacks, let me tell you what advantages we have here

So first of all, if anyone makes a commit in the repository, there will be a commit ID associated with it, and there will always be a commit message

So you know which person has made that commit, at what time, and where in the code, basically, right? So you can always revert back. But let me now discuss a few disadvantages

First of all, it is not locally available

Meaning you always need to be connected to a network to perform any action

It is not available locally, right? So you need to be connected to some sort of network

Basically, since everything is centralized, in case of the central server getting crashed or corrupted, it will result in losing the entire data of the project

Right? So that's a very serious issue guys

And that is one of the reasons why industries don't prefer a centralized version control system. Let's talk about the distributed version control system now

Now, these systems do not necessarily rely on a central server to store all the versions of the project files

So in a distributed Version Control System, every contributor has a local copy or clone of the main repository, as you can see; I'm highlighting it with my cursor right now. That is, everyone maintains a local repository of their own, which contains all the files and metadata present in the main repository

As you can see in the diagram as well, every programmer maintains a local repository of their own, which is actually a copy or clone of the central repository, on their hard drive

They can commit and update the local repository without anyinterference

They can update their local repositories with new data coming from the central server by an operation called pull, and affect changes to the main repository by an operation called push from their local repository. Now, you must be thinking: what advantage do we get here?

What are the advantages of distributed version control over centralized Version Control? Well, basically, the act of cloning an entire repository gives you that advantage

Let me tell you how. All operations apart from push and pull are very fast, because the tool only needs to access the hard drive, not a remote server; hence, you do not always need an internet connection. Committing new change sets can be done locally without manipulating the data on the main repository

Once you have a group of change sets ready

You can push them all at once

So what you can do is add the commit to your local repository, which is there on your local hard drive. You can commit the changes you want in the source code, you know, once you review them, and then once you have quite a lot of commits ready

You can go ahead and push them onto the central server as well. And if the central server gets crashed at any point of time, the lost data can be easily recovered from any one of the contributors' local repositories

This is one very big advantage. Apart from that, since every contributor has a full copy of the project repository, they can share changes with one another if they want to get some feedback before affecting the changes in the main repository as well

So these are the various ways in which, you know, a distributed version control system is actually better than a centralized version control system
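
Just to make this concrete, here is a small sketch (the directory paths, file names and commit messages are placeholders I've made up) showing that a clone is a full repository: you can commit locally with no server access at all.

```shell
# Sketch: in a distributed VCS, every clone carries the full history.
# We clone from a local path, so no network or central server is involved.
main=$(mktemp -d)
cd "$main"
git init -q .
git config user.email "dev@example.com"   # identity required for commits
git config user.name  "Dev"
echo "v1" > file.txt
git add file.txt && git commit -q -m "initial commit"

clone="$(mktemp -d)/clone"
git clone -q "$main" "$clone"             # full copy, history included
cd "$clone"
git config user.email "dev2@example.com"
git config user.name  "Dev Two"
echo "v2" > file.txt
git commit -q -am "local change"          # committed without any server
git log --oneline                         # both commits visible locally
```

And if the clone's owner wants feedback before touching the main repository, they can share this commit with another contributor first, exactly as described above.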

So we saw the two types of source code management systems, and I hope you have understood them

We are going to discuss one source code management tool called Git, which is very popular in the market right now; almost all the companies actually use Git. Now I'll move forward, and we'll focus on a source code management tool, a distributed Version Control tool, that is called Git. But before I move forward guys, let me make this thing clear

So when I say Version Control or source code management, it's one and the same thing

Let's talk about Git now. Git is a distributed Version Control tool that supports distributed nonlinear workflows by providing data assurance for developing quality software, right? So it's a pretty tough definition to follow, but it will be easier for you to understand with the diagram that is there in front of your screen

So for example, I am a developer, and this is my working directory right now

What I want to do is make some changes to my local repository, because it is a distributed Version Control System; I have my local repository as well

So what I'll do is perform a git add operation. Now, because of git add, whatever was there in my working directory will be present in the staging area

Now, you can visualize the staging area as something which is between the working directory and your local repository, right? And once you have done git add, you can go ahead and perform git commit to make changes to your local repository

And once that is done, you can go ahead and push your changes to the remote repository as well

After that, you can even perform git pull to add whatever is there in your remote repository to your local repository, and perform git checkout to get everything which was there in your local repository into your working directory as well

All right, so let me just repeat it once more for you guys

So I have a working directory here

Now, in order to add that to my local repository, I need to first perform git add; that will add it to my staging area. The staging area is nothing but the area between the working directory and the local repository. After git add, I can go ahead and execute git commit, which will add the changes to my local repository

Once that is done

I can perform git push to push the changes that I've made in my local repository to the remote repository. And in order to pull the changes which are there in the remote repository into the local repository, you can perform git pull, and finally git checkout, which will bring them into your working directory as well; there's also git merge, which is a pretty similar command
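
So the flow I just repeated, working directory to staging area with git add and staging area to local repository with git commit, can be sketched like this (the directory and file name here are just placeholders):

```shell
# Minimal sketch of the local Git workflow: add, then commit.
repo=$(mktemp -d)
cd "$repo"
git init -q .                             # initialize a local repository
git config user.email "dev@example.com"   # identity required for commits
git config user.name  "Dev"
echo 'print("hello")' > hello.py          # a change in the working directory
git add hello.py                          # working directory -> staging area
git status --short                        # shows hello.py staged for commit
git commit -q -m "first commit"           # staging area -> local repository
git log --oneline                         # the commit now exists locally
```

git push and git pull would then move commits between this local repository and a remote one, which we'll see in the hands-on part.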

Let me just show you a few basic commands of Git. So, I've already installed Git on my CentOS virtual machine

So let me just quickly open my CentOS virtual machine to show you a few basic operations that you can perform with Git. This is my virtual machine, and I've told you that I have already installed Git. Now, in order to check the version of Git, you can just type in git --version, and you can see that I have 2.7.2 here

Let me go ahead and clear my terminal

So now let me first make a directory, and let me call it edureka-repo, and I'll move into this edureka repo

So the first thing that I need to do is initialize this repository as an empty git repository

So for that, all I have to type here is git init, and it will go ahead and initialize this empty directory as a local git repository

So it has been initialized now. As you can see, it says: initialized empty Git repository in /home/edureka/edureka-repo/.git. Right then, so over here I'm just going to create a file, a Python file. Let me just name it edureka.py, and I'm going to make some changes in this particular file

So I'll use gedit for that

I'm just going to write in here, uh, a normal print statement: "Welcome to edureka". Close the parenthesis, save it, close it

Let me get my terminal. Now, if I hit an ls command, I can see that the edureka.py file is here


If you can recall from the slides, I was telling you that in order to add a particular file or directory into the local git repository, first I need to add it to my staging area. And how will I do that? By using the git add command

So all I have to type here is git add and the name of my file, which is edureka.py, and here we go

So it is done now. Now, if I type in here git status, it will give me the files which I need to commit. So this particular command gives me the status; it will basically tell me the modified files that I need to commit to the local repository. So it says a new file has been created, that is edureka.py, and it is present in the staging area, and I need to commit this particular file

So all I have to type here is git commit -m and the message that I want; I'll just type in here "first commit", and here we go

So it is successfully done now

So I've added a particular file to my local git repository

So now what I'm going to show you is basically how to deal with the remote repositories

So I have a remote git repository present on GitHub

So I have created a GitHub account

The first thing that you need to do is create a GitHub account, and then you can go ahead and create a new repository there. Then I'll tell you how to add that particular repository to a local git repository

Let me just go to my browser once, and let me just zoom in a bit

And yeah, so this is my GitHub account guys

And what I'm going to do is first go to this repositories tab, and I'm going to add one new repository

So I'll click on New

I'm going to give a name to this repository

So whatever name you want to give, just go ahead and do that

Let me just write it here

git-tutorial-devops, or whatever name you feel like, just go ahead and write that. I'm going to keep it public; if you want any description, you can go ahead and give that, and I can also initialize it with a README. Click create repository, and that's all you have to do in order to create a remote GitHub repository. Now, over here

You can see that there's only one README.md file

So what I'm going to do is just copy this particular SSH link, and I'm going to perform git remote add origin and the link that I just copied

I'll paste it here, and here we go

So this has basically added my remote repository to my local repository

Now, what I can do is go ahead and pull whatever is there in my remote repository to my local git repository. For that, all I have to type here is git pull origin master, and here we go

So that is done

Now, as you can see, I've pulled all the changes

So let me clear my terminal and hit an ls command. You'll find README.md present here right now

What I'm going to show you is basically how to push this edureka.py file onto my remote repository

So for that, all I have to type here is git push origin master, and here we go

So it is done


Let me just go ahead and refresh this particular repository, and you'll find the edureka.py file here. Let me just go ahead and reload this, so you can see the edureka.py file where I've written "Welcome to edureka"

So it's that easy guys

Let me clear my terminal now

So I've covered a few basics of Git. Let's move forward with this DevOps tutorial, and we are going to focus on the next stage, which is called continuous integration

So we have seen a few basic commands of Git: we saw how to initialize an empty directory as a git repository, how we can add a file to the staging area, and how we can go ahead and commit it to the local repository

After that

We saw how we can push the changes in the local repository to the remote repository.

My repository was on GitHub

I told you how to connect to the remote repository, and then how you can even pull the changes from the remote repository. Right, so all of these things we have discussed in detail.

Now let's move forward, guys, and we are going to focus on the next stage, which is called continuous integration.

So continuous integration is basically a development practice in which the developers are required to commit changes

to the source code in a shared repository several times a day, or you can say more frequently, and every commit made in the repository is then built. This allows the teams to detect problems early.

So let us understand this with the help of the diagram that is there in front of your screen.

So here we have multiple developers who are writing code for a particular application, and all of them are committing code to a shared repository, which can be a Git repository or a Subversion repository. From there, the Jenkins server, which is nothing but a continuous integration tool, will pull that code: the moment any developer commits a change in the source code, the Jenkins server will pull it and prepare a build. Now, as I have told you earlier as well, a build does not only mean compiling the source code.

It includes compiling, but apart from that there are other things as well,

for example code review, unit testing, integration testing, and, you know, packaging your application into an executable file.

It can be a WAR file.

It can be a JAR file.

So it happens in a continuous manner: the moment any developer commits a change in the source code, the Jenkins server will pull that and prepare a build.

This is called continuous integration.

So Jenkins has various plugins in order to perform this: it has plugins for development, testing and deployment technologies.

It has well over 2,500 plugins

So you need to install the plug-in that you want, and you can just go ahead and trigger whatever job you want with the help of Jenkins.

It is originally written in Java.

Right, let's move forward, and we are going to focus on continuous delivery now. So continuous delivery is nothing but taking continuous integration to the next step.

So what are we doing? In a continuous manner, or in an automated fashion, we are taking this built application onto the test server for end user testing, or user acceptance testing, right? So that is basically what continuous delivery is.

So let us just summarize continuous delivery again.

The moment any developer makes a change in the source code, Jenkins will pull that code and prepare a build. Once the build is successful,

it will take the built application, and Jenkins will deploy it onto the test server for end user testing, or user acceptance testing.

So this is basically what continuous delivery is. It happens in a continuous fashion.

So what advantage do we get here? Basically, if there is a build failure, then we know which commit has caused that error, and we don't need to go through the entire source code of the application. Similarly for testing: even if any bug appears in testing as well, we know which commit has caused that error, and we can just go ahead and have a look at that particular commit instead of checking out the entire source code of the application.

So basically this system allows the team to detect problems early, right, as you can see from the diagram as well.

You know, if you want to learn more aboutJenkins, I'll leave a link in the chat box

You can go ahead and refer that and peopleare watching it on YouTube can find that link in the description box below now, we're goingto talk about continuous deployment

So continuous deployment is basically taking the application, the built application that you have tested, and deploying that onto the prod servers for release in an automated fashion.

So once the application is tested, it will automatically be deployed onto the prod servers for release.

Now, this is not a good practice, as I've told you earlier as well, because there might be certain checks that you need to do before you release your software in the market.

Or you might want to market your product before that. So there are a lot of things that you want to do before deploying your application.

So it is not advisable, or a good practice, to actually automatically deploy your application onto the prod servers for release. So this is basically continuous integration, delivery and deployment. Any questions

you have, guys, you can ask me.

All right, so Dorothy wants me to repeat it

once more. Sure, I'll do that.

Let's start with continuous integration.

So continuous integration is basically committing the changes in the source code more frequently, and every commit will then be built using a Jenkins server, right, or any continuous integration server.

So this Jenkins, what it will do is it will trigger a build the moment any developer commits a change in the source code, and a build includes compiling, code review, unit testing, integration testing, packaging and everything.

So I hope you are clear with what continuous integration is.

It is basically continuously building your application: the moment any developer commits a change in the source code, Jenkins will pull that code and prepare a build.
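To make the flow just described concrete, a minimal declarative Jenkinsfile for such a continuous-integration pipeline could look like the sketch below. The Maven commands are an assumption for illustration (any build tool works); this is not the exact pipeline used in this course.

```groovy
// Illustrative declarative pipeline: build on every commit,
// then run tests and package the application.
pipeline {
    agent any
    stages {
        stage('Checkout') {
            steps { checkout scm }          // pull the code the commit landed on
        }
        stage('Build') {
            steps { sh 'mvn -B compile' }   // compile (assumes a Maven project)
        }
        stage('Test') {
            steps { sh 'mvn -B test' }      // unit and integration tests
        }
        stage('Package') {
            steps { sh 'mvn -B package' }   // produce the WAR/JAR
        }
    }
}
```

Jenkins runs these stages automatically on every commit it detects in the shared repository, which is exactly the "every commit is built" behaviour described above.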

Let's move forward, and now I'm going to explain continuous delivery. Now, in continuous delivery, the package that we created here, the WAR or the JAR file, the executable file,

Jenkins will take that package and it will deploy it onto the test server for end user testing.

So this kind of testing is called end user testing, or user acceptance testing, where you need to deploy your application onto a server which can be a replica of your production server, and you perform end user testing, or you call it user acceptance testing.

For example, in my application, if I want to perform functional testing, I will first go ahead and check whether my search engine is working, then I'll check whether people are able to log in or not.

So when I check all those functions of a website or an application, it is basically after deploying it onto a test server, right? That sort of testing is basically what functional testing is, or what I'm trying to refer to here. Next up,

we are going to continuously deploy our application onto the prod servers for release.

So once the application is tested, it will then be deployed onto the prod servers for release, and as I've told you earlier as well, it is not a good practice to deploy your application continuously, or in an automated fashion.

So guys, we have discussed a lot about Jenkins.

How about I show you how the Jenkins UI looks, and how you can download plugins and all those things?

So I've already installed Jenkins in my CentOS virtual machine. So let me

just quickly open

my CentOS virtual machine.

So guys, this is my CentOS virtual machine again, and over here

I have configured my Jenkins on localhost:8080/jenkins, and here we go.

I just need to provide the username and password that you gave when you were installing Jenkins.

So this is how Jenkins looks, guys. Over here,

there are multiple options.

You can just go and play around with it

Let me just take you through a few basicoptions that are there

So when you click on New Item, you'll be directed to a page which will ask you to give a name to your project.

So give whatever name that you want to give, then choose the kind of project that you want,

right, and then you can go ahead and provide the required specifications and configurations for your project.

Now, when I was talking about plugins, let me tell you how you can actually install plug-ins.

So you need to go to Manage Jenkins, and here's a tab that you'll find: Manage Plugins.

In this tab, you can find all the updates that are there for the plugins that you have already installed. In the Available section,

you'll find all the available plugins that Jenkins supports, so you can just go ahead and search for the plug-in that you want to install, just check it, and then you can go ahead and install it. Similarly,

the plug-ins that are installed will be found in the Installed tab, and then you can go ahead and check out the Advanced tab as well.

So this is something different

Let's not just focus on this for now

Let me go back to the dashboard, and this is basically one project that I've executed, which is called Edureka Pipeline, and this blue colour ball means it was successful.

That's how it works guys

So I was just giving you a tour of the Jenkins dashboard. We'll actually execute the practical as well.

So we'll come back to itlater

But for now, let me open my slides, and we'll proceed with the next stage in the DevOps life cycle.

So now let's talk about configuration management

So what exactly is configuration management? So now let me talk about a few issues with the deployment of a particular application, or provisioning of the servers. So basically what happens: I've built my application, but when I deploy it onto the test servers or onto the prod servers, there are some dependency issues because of which my application is not working fine. For example, on my developer's laptop

there might be some software stack which was upgraded, but in my prod and in my test environment, they're still using the outdated version of that software stack, because of which the application is not working fine.

This is just one example. Apart from that, what happens when your application is live and it goes down because of some reason, and that reason can be that you have upgraded the software stack?

Now, how will you go back to the previous stable version of that software stack?

So there are a lot of issues with, you know, the admin side of the company, the ops side of the company, which were removed with the help of configuration management tools.

So, you know, before, admins used to write these long scripts in order to provision the infrastructure, whether it's the test environment, the prod environment or the dev environment. So they utilized those long scripts, which are prone to error. Plus,

it used to take a lot of time, and apart from that, only the admin who has written that script

can actually recognize what's the problem with it if you have to debug it. So there are a lot of problems

with the admin side, or the ops side, of the company, which were removed with the help of configuration management tools. And one very important concept that you guys should understand is called infrastructure as code, which means writing code for your infrastructure.

That's what it means. Suppose I want to install a LAMP stack on all of these three environments, whether it's dev, test or prod: I will write the code for installing the LAMP stack in one central location, and I can go ahead and deploy it onto dev, test and prod. So I have the record of the system state present in my one central location. Even if I upgrade to the next version, I still have the record of the previous stable version of the software stack, right? So I don't have to manually go ahead and, you know, write scripts and deploy them onto the nodes. It's that easy, guys.
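As an illustration of that idea, here is what a tiny infrastructure-as-code definition might look like as a Puppet manifest. The node pattern and package names are hypothetical (and assume Debian-family nodes); it is a sketch of the concept, not the exact code used in any environment here.

```puppet
# Hypothetical manifest: describe the desired LAMP-style state once,
# then apply the same definition to dev, test and prod nodes.
node /^(dev|test|prod)-web\d+$/ {
  package { ['apache2', 'mariadb-server', 'php']:
    ensure => installed,
  }

  service { 'apache2':
    ensure  => running,
    enable  => true,
    require => Package['apache2'],   # start the service only after install
  }
}
```

Because the desired state lives in one central place under version control, rolling back to a previous stable stack is just a matter of applying an earlier revision of this file.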

So let me just focus on a few challenges that configuration management helps us to overcome.

First of all, it can help us to figure out which components to change when requirements change.

It also helps us in redoing an implementation because the requirements have changed since the last implementation, and a very important point, guys, is that it helps us to revert to a previous version of a component if you have replaced it with a new but flawed version. Now, let me tell you the importance of configuration management through a use case. The best example I know is of the New York Stock Exchange. A software glitch prevented the NYSE from trading stocks for almost 90 minutes, which led to millions of dollars of loss. A new software installation caused the problem. The software was installed on 8 of its 20 trading terminals, and the system was tested out the night before. However, in the morning it failed to operate on the 8 terminals.

So there was a need to switch back to the old software.

Now you might think that this was a failure of the NYSE's configuration management process, but in reality it was a success. As a result of proper configuration management, the NYSE recovered from that situation in 90 minutes, which was pretty fast. Had the problem continued longer, the consequences would have been more severe, guys.

So I hope you have understood its importance.

Now, let's focus on various tools available for configurationmanagement

So we have multiple tools like Puppet, Chef, Ansible and SaltStack.

I'm going to focus on Puppet for now.

So Puppet is a configuration management tool that is used for deploying, configuring and managing servers.

So, let's see, what are the various functions of puppet

So first of all, you can define distinct configurations for each and every host, and continuously check and confirm whether the required configuration is in place and is not altered on the host.

So what do I mean by that? You can actually define distinct configurations: for example, on one particular node

I need this software stack,

and on another node

I need that software stack. So I can, you know, define distinct configurations for different nodes, and continuously check and confirm whether the required configuration is in place and is not altered, and if it is altered, Puppet will revert back to the required configuration.

This is one function of Puppet.

It can also help in dynamic scaling up and scaling down of machines.

So what will happen if in your company there's a big billion day sale, right, and you're expecting a lot of traffic?

So at that time, in order to provision more servers: probably today your task is to provision 10 servers, and tomorrow you might have to provision 20 machines,

right?

So how will you do that? You cannot go ahead and do that manually by writing scripts.

You need tools like Puppet that can help you in dynamic scaling up and scaling down of machines.

It provides control over all of your configured machines,

so a centralized change gets propagated to all of them automatically. It follows a master-slave architecture in which the slaves poll the central server for changes made in the configuration.

So we have multiple nodes which are connected to the master.

So they will poll the master; they will check continuously

whether any change in the configuration has happened on the master. The moment any change happens, they will pull that configuration and deploy it onto that particular node.

I hope you're getting my point.

So there are two models, called pull configuration and push configuration.

In push configuration, the master will actually push the configurations onto the nodes, which happens in Ansible and SaltStack, but does not happen in Puppet and Chef.

So these two tools, Puppet and Chef, follow pull configuration, whereas Ansible and SaltStack follow push configuration, in which the configurations are pushed onto the nodes. Here in Chef and Puppet,

the nodes will pull the configurations.

They keep on checking the master at regular intervals, and if there's any change in the configuration, they'll pull it.

Now let me explain the architecture that is there in front of your screen.

So that is basically a typical Puppet architecture, in which there's a master/slave setup: here is our Puppet master and here is our Puppet slave. Now, the functions which are performed in this architecture: first, the Puppet agent sends facts to the Puppet master.

So this Puppet slave will first send the facts to the Puppet master. What are facts? Basically, they are key/value data pairs.

They represent some aspects of the slave's state, such as its IP address, uptime, operating system, or whether it's a virtual machine. So that's what facts are, and the Puppet master uses the facts to compile a catalog that defines how the slave should be configured.

What is the catalog? It is a document that describes the desired state for each resource that the Puppet master manages.

And lastly, the Puppet slave reports back to the master, indicating that configuration is complete, which is also visible in the Puppet dashboard.

So that's how it works, guys.

So let's move Forward and talk about containerization

So what exactly is containerization? So I believe all of you have heard about virtual machines. So what are containers? Containers are nothing but lightweight alternatives to virtual machines.

So let me just explain that to you.

So we have Docker containers that will contain the binaries and libraries required for a particular application,

and that's when we say

we have containerized a particular application.

Right? So let us focus on the diagram thatis there in front of your screen

So here we have the host operating system, on top of which we have the Docker engine.

We have no guest operating system here, guys.

It uses the host operating system, and we are running two containers: container one will have application one and its binaries and libraries, and container two will have application two and its binaries and libraries.

So all I need in order to run my application is this particular container, because all the dependencies are already present in that particular container.

So what is a container, basically? It contains my application, the dependencies of my application:

the binaries and libraries required for that application

are there in my container. Nowadays, you must have noticed that even when you want to install some software, you will often get a ready-to-use Docker container, right? That is the reason: because it's pretty lightweight when you compare it with virtual machines. So let me discuss a use case of how you can actually use Docker in the industry.

So suppose youhave some complex requirements for your application

It can be a microservice

It can be a monolithicapplication anything

So let's just take microservice

So suppose you have complex requirements for your microservice, and you have written the Dockerfile for that. With the help of this Dockerfile,

I can create a Docker image

So a Docker image is nothing but, you know, a template: you can think of it as a template for your Docker container, right? And with the help of a Docker image, you can create as many Docker containers as you want.

Let me repeat it once more: we have written the complex requirements for a microservice application in an easy-to-write Dockerfile. From there,

we have created a Docker image, and with the help of the Docker image we can build as many containers as we want.

Now, that Docker image, I can upload onto Docker Hub, which is nothing

but a git-like repository of Docker images. We can have public repositories, we can have private repositories, and from Docker Hub any team, be it staging or production, can pull that particular image and prepare as many containers as they want.

So what advantage do we get here? Whatever was there on my developer's laptop, right, the microservice application:

the guy who has written that, and the requirements for that microservice application,

so that guy is basically a developer, because he's only developing the application.

So whatever is there on my developer's laptop, I have replicated in my staging as well as in my production.

So there's a consistent computing environment throughout my software delivery life cycle.

I hope you are getting my point

So guys, let me just quickly brief you again about what exactly Docker containers are. Just visualize a container as actually a box in which our application is present with all its dependencies, except the box is infinitely replicable.

Whatever happens in the box stays in the box, unless you explicitly take something out or put something in, and when it breaks you just throw it away and get a new one. So containers usually make your application easy to run on a different computer.

Ideally, the same image should be used to run containers in every environment stage, from development to production.

So that's what basically Docker containers are

So guys

This is my CentOS virtual machine here again, and I've already installed Docker.

So the first thing is I need to start Docker. For that,

I'll type systemctl start docker.

Give the password

And it has started successfully

So now what I'm going to do: there are a few images which are already there on Docker Hub, which are public images.

You can pull them anytime you want.

Right? So you can go ahead and run that image as many times as you want

You can create as many containers as you want

So basically, when I execute the command of pulling an image from Docker Hub, it will first try to find it locally, whether it's present or not, and if it is present, then it's well and good.

Otherwise, it will go ahead and pull it from Docker Hub.

So, right, before I move forward, let me just show you how Docker Hub

looks. If you have not created an account on Docker Hub, you need to go and do that, because for executing our use case you have to, and it's free of cost.

So this is how Docker Hub looks, guys, and this is my repository that you can notice here.

Right? I can goahead and search for images here as well

So for example, if I want to search for Hadoop images, which I believe one of you asked about, you can find that we have Hadoop images present here as well.

Right? So these are nothing but few images that are there on Docker Hub

So I believe now I can go back to my terminal and execute a few basic Docker commands.

So the first thing that I'm going to execute is called docker images, which will give the list of all the images that I have on my local system.

So I have quite a lot of images, as you can see: this is the size and all those things, when the image was created,

and this is called the image ID, right? So I have all of these things displayed on my console.

Let me just clear my terminal. Now what I'm going to do is pull an image, right?

All I have to type here is docker pull. For example, if I want to pull an Ubuntu image,

I just type in here docker pull ubuntu, and here we go.

So it is using the default tag, latest.

So tags are something that I'll tell you about later, but it will provide the default tag latest all the time.

So it is pulling from Docker Hub right now, because it couldn't find it locally.

So the download is completed, and it is currently extracting it.

Now, if I want to run a container, all I have to type here is docker run -it ubuntu, or you can type the image ID as well.

So I am in the Ubuntu container.

So I've told you how you can see the various Docker images, I've told you how you can pull an image from Docker Hub, and how you can actually go ahead and run a container. Now we're going to focus on continuous monitoring. So continuous monitoring tools resolve any system errors, you know, things like low memory, an unreachable server, etc.,

Before they have any negative impact on your businessproductivity

Now, what are the reasons to use continuous monitoring tools? Let me tellyou that it detects any network or server problems

It can determine the root causeof any issue

It maintains the security and availability of the services, and also monitors and troubleshoots server performance issues.

It also allows us to plan for infrastructure upgrades before outdated systems cause failures, and it can respond to issues at the first sign of a problem. And let me tell you, guys, these tools can be used to automatically fix problems when they are detected as well.

It also ensures IT infrastructure outages have a minimal effect on your organization's bottom line, and it can monitor your entire infrastructure and business processes.

So what is continuous monitoring? It is all about the ability of an organization to detect, report, respond to, contain and mitigate attacks that occur on its infrastructure or on its software.

So basically, we have to monitor the events on an ongoing basis and determine what level of risk

we are experiencing.

So if I have to summarize continuous monitoring in one definition, I will say it is the integration of an organization's security tools.

So we have different security tools in an organization: the integration of those tools, the aggregation, normalization and correlation of the data that is produced by those security tools,

the analysis of that data based on the organization's risk goals and threat knowledge, and near-real-time response to the risks identified, is basically what continuous monitoring is. And there is a very good saying, guys: if you can't measure it, you can't manage it.

I hope you know what I'm talking about.

Now, there are multiplecontinuous monitoring tools available in the market

We're going to focus on Nagios now. Nagios is used for continuous monitoring of systems, application services and business processes in a DevOps culture, right, and in the event of failure, Nagios can alert technical staff of the problem, allowing them to begin the remediation process before outages affect business processes and users or customers. So with Nagios, you don't have to explain why an infrastructure outage affected your organization's bottom line.

So let me tell you how it works

So I'll focus on the diagram that is there in front of your screen

So Nagios runs on a server, usually as a daemon or a service. It periodically runs plugins residing on the same server, and they contact hosts or servers on your network, as you can see in the diagram as well.

These hosts or servers, on your network or on the Internet, can be locally present or remotely present as well.

One can view the status informationusing the web interface

You can also receive email or SMS notifications if something happens. So the Nagios daemon behaves like a scheduler that runs certain scripts at certain moments.

It stores the results of those scripts and will run other scripts if those results change. Now, what are plugins? Plugins are compiled executables or scripts that can be run from a command line to check the status of a host or service.

So Nagios uses the results from the plugins

to determine the current status of the hosts and services on your network.
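The plugin contract just described is simple: print one status line and return 0 (OK), 1 (WARNING), 2 (CRITICAL) or 3 (UNKNOWN). Real plugins such as check_disk ship with Nagios; the following toy sketch, written as a shell function, only illustrates that contract.

```shell
# Toy Nagios-style plugin: report disk usage of / following the
# plugin contract (one status line + OK/WARNING/CRITICAL exit status).
check_root_disk() {
    usage=$(df -P / | awk 'NR==2 { gsub("%",""); print $5 }')
    if [ "$usage" -ge 95 ]; then
        echo "CRITICAL - / is ${usage}% full"; return 2
    elif [ "$usage" -ge 85 ]; then
        echo "WARNING - / is ${usage}% full"; return 1
    else
        echo "OK - / is ${usage}% full"; return 0
    fi
}

STATUS=$(check_root_disk)   # Nagios reads both this line and the exit code
echo "$STATUS"
```

Nagios would invoke such a script on a schedule, store the result, and alert (email, SMS) when the status changes, exactly as described above.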

So what actually happens in this diagram: the Nagios server is running on a host, and plugins interact with local or remote hosts, right?

These plugins will send the information to the scheduler, which displays that in the GUI. That's what is happening, guys.

All right, so we have discussedall the stages

So let me just give you a quick recap of what all things we have discussedfirst

We saw what was the methodology before devops? We saw the waterfall model

What were its limitations? Then we understood the agile model, and the difference between the waterfall and agile methodologies,

and the limitations of the agile methodology. Then we understood how DevOps overcomes all of those limitations, and what exactly DevOps is.

We saw the variousstages and tools involved in devops starting from Version Control

Then we saw continuous integration.

Then we saw continuous delivery.

Then we saw continuous deployment.

Basically, we understood the difference between continuous integration, delivery and deployment. Then we saw what configuration management and containerization are, and finally I explained continuous monitoring, right? So in between I was even switching back to my virtual machine, where I have a few tools already installed, and I was telling you a few basics about those tools. Now comes the most awaited topic of today's session, which is our use case.

So let's see what we aregoing to implement in today's use case

So this is what we'll be doing

We have a git repository, right? So developers will be committing code to this git repository,

and from there,

Jenkins will pull that code: it will first clone that repository, and after cloning that repository it will build a Docker image using a Dockerfile.

So we have the Dockerfile; we'll use that to build an image.

Once that image is built,

we are going to test it and then push it onto Docker Hub. As I've told you, Docker Hub is nothing but like a git repository of Docker images.

So this is what we'll be doing

Let me just repeat it once more: developers will be committing changes in the source code.

So the moment any developer commits a change in the source code, Jenkins will clone the entire git repository.

It will build a Docker image based on a Dockerfile that we'll create, and from there,

it will push the Docker image onto Docker Hub.

This will happen automatically,

at the click of a button.

So what we'll do is we'll be using Git, Jenkins and Docker.

So let me just quickly open my virtual machine and I'll show you that. So what is our application all about?

So we are basically creating a Docker image of a particular application and then pushing it onto Docker Hub in an automated fashion,

and our code is written in a GitHub repository.

So what is the application? It's basically a Hello World server written with Node.js.

So we have a main.js.

Let me just go ahead and show you on my GitHub repository

Letme just go back

So this is how our application looks, guys. We have main.js, right? Apart from that,

we have package.json for the dependencies.

Then we have a Jenkinsfile and a Dockerfile. The Jenkinsfile,

I'll explain to you what we are going to do with it,

but before that, let me just explain a few basics of the Dockerfile, and how we can build a Docker image of this particular,

very basic

Node.js application.

The first thing is writing a Dockerfile. Now, to be able to build a Docker image with our application,

we will need a Dockerfile.

Yeah, right, you can think of it as a blueprint for Docker.

It tells Docker what the contents and parameters of our image should be. Docker images are often based on other images, but before that, let me just go ahead and create a Dockerfile for you.

So let me just first clone this particular Repository

So let me go to that particulardirectory first

It's there in Downloads.

Let me unzip this first: unzip devops-tutorial, and let me hit an ls command.

So here is my application, present.

So I'll just go to this particular devops-tutorial-master directory, and let me just clear my terminal. Let us focus on what all files we have.

We have a Dockerfile.

Let's not focus on the Jenkinsfile at all for now, right? We have a Dockerfile,

we have main.js, package.json, README.md, and we have test.js.

So I have a Dockerfile with the help of which I will be creating a Docker image, right? So let me just show you what I have written in this Dockerfile. Before this,

let me tell you that Docker images are often based on other images. For this example,

we are basing our image on the official node Docker image.

So this line that you are seeing is basically to base our application on the official node Docker image.

This makes our job easy and our Dockerfile very, very short, guys.

So the hectic task of installing node and its dependencies in the image is already done in our base image.

So we'll justneed to include our application

Then we have set a maintainer label.

I mean, this is optionalif you want to do it

Go ahead

If you don't want to do it, it's still fine

There's a health check, which is basically for Docker to be able to tell if the server is actually up or not,

and then finally we are telling Docker which port our server will run on, right? So this is how we have written the Dockerfile.
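Based on the parts just described (official node base image, optional maintainer label, health check, and the server port), the Dockerfile would look roughly like the sketch below. The exact file is in the course repository; the maintainer value, paths, and health-check command here are illustrative assumptions.

```dockerfile
# Base the image on the official node image, so installing node and its
# dependencies is already done in the base image.
FROM node

# Optional maintainer label
LABEL maintainer="you@example.com"

# Include our application in the image
COPY . /app
WORKDIR /app

# Let Docker tell whether the server is actually up
# (assumes curl is available in the image)
HEALTHCHECK --interval=30s CMD curl -f http://localhost:8000/ || exit 1

# The port our server will run on, and the command that starts it
EXPOSE 8000
CMD ["node", "main.js"]
```

With a file like this in place, `docker build` can turn the repository into an image, which is the next step in the demo.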

Let me just go ahead and close this, and now I'm going to create an image using this Dockerfile.

So for that, all I have to type here is sudo docker build /home/edureka/Downloads/devops-tutorial, basically the path to my Dockerfile, and here we go; I need to provide the sudo password.

So it has started now, and it is creating an image for me, the Docker image, and it is done; it successfully built, and this is my image ID, right? So I can just go ahead and run this as well.

So all I have to type here is docker run -it and my image ID, and here we go.

So it is listening at port 8000.
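For reference, the two commands from this demo have the following shape; they are only printed here rather than executed, since actually running them needs Docker and sudo rights, and the path and image ID are placeholders from this walkthrough:

```shell
# Illustrative only: the build and run commands used in the demo.
# The path and <image-id> are placeholders, not values to copy verbatim.
BUILD_CMD="sudo docker build /home/edureka/Downloads/devops-tutorial-master"
RUN_CMD="sudo docker run -it <image-id>"
echo "$BUILD_CMD"
echo "$RUN_CMD"
```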

Let me just stop it for now.

So I've told you how you can create an image using a Dockerfile.

Right now, what I'm going to do is use Jenkins in order to clone a Git repository, then build an image, then perform testing, and finally push it onto Docker Hub, my own Docker Hub profile.

All right, but before that, what we need to do is tell Jenkins what our stages are and what to do in each one of them. For this purpose,

we will write the Jenkins pipeline specification in a Jenkinsfile.

So let me show you how the Jenkinsfile looks; just click on it.

So this is what I have written in my Jenkinsfile, right? It's pretty self-explanatory. First,

I've defined my application.

I mean, I just clone the repository that I have, then build that image.

This is the tag I'm using, which is made up of my username and the repository name, right? We build that image and then test it.

So we are just going to print "test passed," and then finally push it onto Docker Hub, right? So this is the URL of Docker Hub, and my credentials are actually saved in Jenkins under the Docker Hub credentials ID.
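The Jenkinsfile itself isn't shown in this transcript, but a sketch of a scripted pipeline with these four stages could look like this; the image name and credentials ID below are assumptions, not the tutorial's actual values:

```groovy
node {
    def app

    stage('Clone repository') {
        // Clone the Git repository this Jenkins project is configured with
        checkout scm
    }

    stage('Build image') {
        // The tag is <username>/<repository>; both names here are assumptions
        app = docker.build("myuser/myrepo")
    }

    stage('Test image') {
        // The demo simply prints a message for the test stage
        app.inside {
            sh 'echo "test passed"'
        }
    }

    stage('Push image') {
        // The credentials ID must match the one saved in Jenkins
        docker.withRegistry('https://registry.hub.docker.com', 'docker-hub-credentials') {
            app.push("latest")
        }
    }
}
```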

So let me just show you how you can save those credentials.

So go to the Credentials tab; here you need to click on System, and click on Global credentials.

Now, over here you can go ahead and click on Update, and you need to provide your username, your password, and your Docker Hub credential ID, whichever one you are going to pass there, right? So let me just type the password again.

All right.

Now we need to tell Jenkins two things: where to find our code and what credentials to use to publish the Docker image, right? So I've already configured my project.

Let me just go ahead and show you what I have written there.

So the first thing is the name of my project, right, which I was showing you; when you create a new item over there,

there's an option where you need to give the name of your project, and I've chosen a pipeline project.

So if I have to show you the pipeline project, you can go to New Item.

And this is what I've chosen as the kind of project, and then I have clicked on Build Triggers.

So basically this will poll my SCM, the source code management repository, every minute; whenever there is a change in the source code, it will pull that, and it will repeat the entire process every minute. Then, under Advanced Project Options, I've selected the pipeline script from SCM; here you can either write the pipeline script directly or click on Pipeline Script from SCM. The kind of source code management is Git; then I've provided the link to my repository, and that's all I have done. Now, when I scroll down, there's nothing else, so I can just click on Apply and Save. So I've already built this project once.

So let me just go ahead and do it again.

All right, so it has started.

First, it will clone the repository that I have.

You can find all the logs

once you click on this blue colored ball, and you can find the logs here as well.

So once you click here, you'll find it over here as well.

And similarly, the logs are present here also. So now we have successfully built our image.

We have tested it; now

we are pushing it onto Docker Hub.

So we have successfully pushed our image onto Docker Hub as well.

Now, if I go back to my profile and I go to my repository here,

you can find the image is already present here; I have actually pushed it multiple times.

So this is how you will execute the practical.

It was very easy, guys.

So let me just give you a quick recap of all the things we have done. First,

I told you how you can write a Dockerfile in order to create a Docker image of a particular application.

So we were basing our image on the official Node image present on Docker Hub, right, which already contains all the dependencies, and it makes the Dockerfile look very small. After that,

I built an image using the Dockerfile; then I explained to you how you can use Jenkins in order to automate the tasks of cloning a repository, building a Docker image, testing the Docker image, and finally uploading it onto Docker Hub.

We did that automatically with the help of Jenkins; I told you where you need to provide the credentials, what our tags are, and how you can write a Jenkinsfile. As the next part of the use case, different teams, be it staging or production, can actually pull the image that we have uploaded onto Docker Hub and can run as many containers as they want.

Hey everyone, this is Reyshma from Edureka, and in today's tutorial

we're going to learn about Git and GitHub.

So without any further ado, let us begin this tutorial by looking at the topics that we'll be learning today.

So at first we will see what version control is and why we actually need version control. After that, we'll take a look at the different version control tools, and then we'll see all about GitHub and Git, also taking into account a case study of Dominion Enterprises and how they're using GitHub. After that,

we'll take a look at the features of Git, and finally we're going to use all the Git commands to perform all the Git operations.

So this is exactly what we'll be learning today.

So we're good to go.

So let us begin with the first topic.

What is version control? Well, you can think of version control as the management

system that manages the changes that you make in your project till the end. The changes that you make might be adding some new files, or modifying the older files by changing the source code or something.

So what the version control system does is that every time you make a change in your project, it creates a snapshot of your entire project and saves it, and these snapshots are actually known as different versions.

Now, if you're having trouble with the word snapshot, just consider that a snapshot is actually the entire state of your project at a particular time.

It means that it will contain what kind of files your project is storing at that time and what kind of changes you have made.

So this is what a particular version contains. Now, if you see the example here, let's say that I have been developing my own website.

So let's say that in the beginning

I just had only one web page, which is called index.html, and after a few days

I added another web page to it, which is called about.html, and I made some modifications in the about.html by adding some pictures and some text.

So let's see what the version control system actually stores.

So you'll see that it has detected that something has been modified and something has been created.

For example, it is storing that about.html was created and some kind of photo was created or added into it. And let's say that after a few days

I changed the entire page layout of the about.html page.

So again, my version control system will detect some kind of change, and will say that about.html has been modified, and you can consider all of these three snapshots as different versions.

So when I only have my index.html web page and I do not have anything else,

this is my version 1; and after that, when I added another web page, this is going to be version 2; and after I have changed the page layout of my web page,

this is my version 3.

So this is how a version control system stores different versions.
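The three versions described above can be reproduced with Git itself; here is a minimal sketch, where the working directory, file contents, and commit messages are all made up for illustration:

```shell
# A minimal sketch of the three versions above, tracked with Git
# (the directory, file contents, and commit messages are made up):
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "<h1>Home</h1>" > index.html
git add . && git commit -qm "version 1: only index.html"

echo "<h1>About</h1>" > about.html
git add . && git commit -qm "version 2: add about.html with photo and text"

echo "<h1>About, new layout</h1>" > about.html
git commit -qam "version 3: change the about.html page layout"

git log --oneline    # one line per snapshot, newest first
```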

So I hope that you've all understood what a version control system is and what versions are. So let us move on to the next topic, and now we'll see why we actually need version control. You might be thinking, why should I need version control? I know what changes I have made, and maybe I'm making these changes just because I'm correcting my project or something. But there are a number of reasons why we need version control, so let us take a look at them one by one.

So the first thing that a version control system avails us is collaboration.

Now, imagine that there are three developers working on a particular project, and everyone is working in isolation, or even if they're working in the same shared folder,

there might be conflicts sometimes when each one of them tries to modify the same file.

Now, let's say they are working in isolation.

Everyone is minding their own business.

Now, developer one has made some changes XYZ in a particular application, and in the same application developer two has made some other changes ABC, and they continue doing the same thing.

They're making modifications to the same file, but they're doing it differently.

So at the end, when you try to collaborate, or when you try to merge all of their work together, you'll come up with a lot of conflicts, and you might not know who made which changes, and this will end up in chaos.

But a version control system provides you with a shared workspace, and it continuously tells you who has made what kind of change, or what has been changed.

So you'll always get notified if someone has made a change in your project.

So with a version control system, collaboration is available between all the developers, and you can visualize everyone's work properly, and as a result your project will always evolve as a whole from the start, and it will save a lot of time for you, because there won't be many conflicts; obviously, if developer A sees that someone has already made some changes there, he won't go for those, right? He can carry on with his other work.

He can make some other changes without interfering with the other's work.

Okay, so we'll move on to the next reason why we need a version control system.

And this is one of the most important reasons why we need a version control system.

I'll tell you why now.

The next reason is storing versions, because saving a version of your project after you have made changes is very essential, and without a version control system

it can actually get confusing, because there are some questions that will arise in your mind when you are trying to save a version. The first question might be: how much would you save? Would you just save the entire project, or would you just save the changes that you made? Now, if you only save the changes, it'll be very hard for you to view the whole project at a time.

And if you try to save the entire project every time, there will be a huge amount of unnecessary and redundant data lying around, because you'll be saving the same things that have remained unchanged again

and again, and it will cover up a lot of your space. And after that, another problem comes:

how do I actually name these versions? Now, even if you are a very organized person, and you might actually come up with a very comprehensive naming scheme, as soon as your project starts growing and the versions pile up, there is a pretty good chance that you'll actually lose track of naming them.

And finally, the most important question

is: how do you know what exactly is different between these versions? Now, you may ask, okay,

what's the difference between version 1 and version 2, what exactly was changed? You need to remember or document that as well.

Now, when you have a version control system, you don't have to worry about any of that.

You don't have to worry about how much you need to save

or how you name them, and you don't have to remember what exactly is different between the versions, because the version control system always acknowledges that there is only one project.

So when you're working on your project, there is only one version on your disk.

And everything else, all the changes that you've made in the past, is all neatly packed inside the version control system.

Let us go ahead and see the next reason: a version control system provides me with a backup.

Now, the diagram that you see here is actually the layout of a particular distributed version control system. Here

you've got your central server, where all the project files are located, and apart from that, every one of the developers has a local copy, inside their local machine, of all the files that are present in the central server, and these are known as the local copies.

So what the developers do is that every time they start coding, at the start of the day, they actually fetch all the project files from the central server and store them in the local machine, and after they are done working, they actually transfer all the files back into the central server.

So you'll always have a local copy in your local machine, for times of crisis.

Like, maybe let's say that your central server crashes and you have lost all your project files.

You don't have to worry about that, because all the developers are maintaining a local copy, the same exact copy of all the files related to your project that are present in the central server.

It is there in your local machine, and even if, let's say, this developer has not updated his local copy with all the files, and the central server crashes while his local copy is out of date, there is always going to be someone who has already updated theirs, because obviously there are going to be a huge number of collaborators working on the project.

So a particular developer can communicate with other developers and fetch all the project files from another developer's local copy as well.

So it is very reliable when you have a version control system, because you're always going to have a backup of all your files.

So the next thing with which version control helps us is analyzing my project, because when you have finished your project, you want to know how your project has actually evolved, so that you can make an analysis of it and know what you could have done better, or what could have been improved, in your project. So you need some kind of data to make an analysis, and you want to know what exactly changed, when it was changed, and how much time it took, and a version control system actually provides you with all that information, because every time you change something, the version control system provides you with a proper description of what was changed

and when it was changed. You can also see the entire timeline, and you can make your analysis report in a very easy way, because you have got all the data present here.

So this is how a version control system helps you analyze your project as well.

So let us move ahead and take a look

at the version control tools, because in order to incorporate a version control system in your project, you have to use a version control tool.

So let us take a look at what is available,

what kind of tools you can use to incorporate a version control system.

So here we've got the four most popular version control tools. The first is Git, and this is what we'll be learning in today's tutorial: how to use Git. And apart from Git you have got other options as well.

You've got Apache Subversion, which is also popularly known as SVN, and CVS, which is the Concurrent Versions System.

They are both centralized version control tools.

That means they do not provide all the developers with a local copy.

It means that all the contributors, or all the collaborators, are actually working directly with the central repository only; they don't maintain local copies. And these kinds of tools are actually becoming obsolete, because everyone prefers a distributed version control system, where everyone has a local copy. Mercurial, on the other hand, is very similar to Git; it is also a distributed version control tool. But we'll be learning all about Git here.

That's why Git is highlighted in yellow.

So let's move ahead.

So this is the interest-over-time graph, and this graph has been collected from Google Trends, and it actually shows you how many people have been using what, and at what time. So the blue line here actually represents Git, the green is SVN,

the yellow is Mercurial, and the red is CVS.

So you can see that from the start Git has always been the most popular version control tool as compared to SVN, Mercurial, and CVS, and it has always kind of been a bad day for CVS, but Git has always been popular.

So why not use Git, right? So there's not much more to say about that, and a lot of my fellow attendees agree with me.

We should all use Git, and we're going to learn how to use Git in this tutorial.

So let us move ahead, and let us learn all about Git and GitHub right now.

So the diagram that you see on my left is actually the diagram which represents what exactly GitHub is and what exactly Git is. Now, I've been talking about a distributed version control system, and the right-hand side diagram actually shows you the typical layout of a distributed version control system. Here

we've got a central server, or a central repository. Now, I'll be using the word repository a lot from now on, so just so that you don't get confused,

I'll just give you a brief overview.

I'll also tell you in detail

what a repository is, and I'll explain everything later in this tutorial, but for now just consider a repository as a data space where you store all the project files, any kind of files related to

your project. So don't get confused when I say repository instead of server or anything else.

So in a distributed version control system, you've got a central repository, and you've got local repositories as well, and each of the developers first makes the changes in their local repository; after that, they push those changes, or transfer those changes, into the central repository, and they also update their local repositories with all the new files that are pushed into the central repository, by an operation called pull.

So this is how they fetch data from the central repository.

And now, if you see the diagram again on the left, you'll know that GitHub is going to be my central repository, and Git is the tool that is going to allow me to create my local repositories.
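The push/pull flow just described can be sketched with Git alone, simulating the "central repository" with a local bare repository; all the paths and names below are made up, and a real central repository would live on a server or on GitHub:

```shell
# A sketch of the push/pull flow, with a local bare repository standing in
# for the central repository (all paths and names here are made up):
central="$(mktemp -d)/central.git"
git init -q --bare "$central"                      # the shared central repository

work="$(mktemp -d)"
git clone -q "$central" "$work/dev1" 2>/dev/null   # developer 1's local repository
cd "$work/dev1"
git config user.email "dev1@example.com"
git config user.name "Dev One"

echo "hello" > readme.md
git add . && git commit -qm "a change made in the local repository"
branch="$(git symbolic-ref --short HEAD)"
git push -q origin "$branch"                       # push: send local changes to the central repo

git clone -q -b "$branch" "$central" "$work/dev2"  # developer 2 gets a full copy of the project
```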

Now, let me tell you exactly what GitHub is.

Now, people actually get confused between Git and GitHub; they think that it's kind of the same thing, maybe because of the names, since they sound very alike.

But they are actually very different.

Well, Git is a version control tool that will allow you to perform all these kinds of operations: to fetch data from the central server, and to push all your local files into the central server.

So this is what Git will allow you to do; it is just a version control management tool.

Whereas GitHub is a code hosting platform for version control and collaboration.

So GitHub is just a company that allows you to host your central repository on a remote server.

If you want me to explain it in easy words, you can consider GitHub a social network, which is very much similar to Facebook,

only the difference is that this is a social network for developers.

Whereas on Facebook you're sharing all your photos and videos or any kind of statuses,

what the developers do on GitHub is share their code for everyone to see, their projects, the code they have worked on.

So that is GitHub.

There are certain advantages of a distributed version control system.

Well, the first one, which I've already discussed, is that it provides you with a backup.

So if at any time your central server crashes, everyone will have a backup of all their files. And the next reason is that it provides you with speed, because the central server is typically located on a remote machine, and you always have to travel over a network to get access to all the files.

So if sometimes you don't have internet and you want to work on your project, that would be kind of impossible, because you don't have access to all your files. But with a distributed version control system, you don't need internet access always; you just need internet when you want to push to or pull from the central server. Apart from that, you can work on your own; your files are all inside your local machine, so fetching them

into your workspace is not a problem.

So those are all the advantages that you get with a distributed version control system, and that a centralized version control system cannot actually provide you. So now let us take a look at a GitHub case study of Dominion Enterprises.

So Dominion Enterprises is a leading marketing services and publishing company that works across several industries, and they have got more than 100 offices worldwide.

So they have distributed technical teams supporting the development of a range of websites, including popular sites such as ForRent.com and Homes.com.

All the Dominion Enterprises websites together actually get tens of millions of unique visitors every month, and each of the websites they work on has a separate development team, and all of them have unique needs and workflows of their own, and all of them were working independently; each team has their own goals, their own projects and budgets. But they actually wanted to share resources, and they wanted everyone to see what each of the teams was actually working on.

So basically, they wanted transparency.

Well, they needed a platform that was flexible enough to support a variety of workflows,

and that would provide all the Dominion Enterprises developers around the world with a secure place to share code and work together, and for that they adopted GitHub as the platform.

And the reason for choosing GitHub is that developers across Dominion Enterprises were already using github.com.

So when the time came to adopt a new version control platform, GitHub Enterprise obviously seemed like a very intuitive choice, and because all the developers were already familiar with GitHub,

the learning curve was also very small, and so they could start contributing code right away into GitHub, and with GitHub all the developer teams,

all the development teams, were provided a place where they can always share their code and what they're working on.

So at the end, everyone has got a very secure place to share code and work together.

And as Joe Fuller, the CIO of Dominion Enterprises, says, GitHub Enterprise has allowed them to store their company source code in a central, corporately controlled system. And Dominion Enterprises actually manages more than 45 websites, and it was very important for Dominion Enterprises to choose a platform that made working together possible.

And this wasn't just a matter of sharing Dominion Enterprises' open source projects on GitHub.

They also had to weigh the implications of storing private code publicly, to make their work more transparent across the company as well. And they were also using Jenkins to facilitate a continuous integration environment, and in order to continuously deliver their software,

they adopted GitHub as the version control platform.

So GitHub actually facilitated a lot of things for Dominion Enterprises, and with that they were able to incorporate a continuous integration environment with Jenkins, and they were actually sharing their code and making software delivery even faster.

So this is how GitHub helped, and not just Dominion Enterprises; I'm sure this might be common to a lot of other companies as well.

So let us move forward.

So now this is the topic that we were waiting for, and now we'll learn what Git is. So Git is a distributed version control tool, and it supports a distributed, non-linear workflow.

So Git is the tool that actually facilitates all the distributed version control system benefits, because it will allow you to create a local repository

in your local machine, and it will help you access your remote repository, to fetch files from there or push files to it.

So Git is the tool that you require to perform all these operations, and I'll be telling you all about how to perform these operations using Git later in this tutorial. For now,

just think of Git as a tool that you actually need for all kinds of version control system tasks.

So we'll move on, and we'll see the different features of Git now.

So these are the different features of Git: it is distributed, Git is compatible, Git provides you with a non-linear workflow, it avails you branching,

it's very lightweight, it provides you with speed,

it's open source,

and it's reliable, secure, and economical.

So let us take a look at all these features one by one.

So the first feature that we're going to look into is that it's distributed. Now, I've been telling you it's a distributed

version control tool. That means the feature Git provides is that it gives you the power of having a local repository, and lets you have a local copy of the entire development history, which is located in the central repository, and it will fetch all the files from the central repository to keep your local repository always updated. And we're calling it distributed because, let's say, there are a number of collaborators or developers; they might be living in different parts of the world.

Someone might be working from the United States, and one might be in India.

So the work on the project is actually distributed.

Everyone has a local copy.

So it is distributed worldwide, you can say. So this is what distributed actually means.

So the next feature is that it is compatible.

Now, let's say that you might not be using Git in the first place,

but you have a different version control system already installed, like SVN, that is, Apache Subversion, or CVS, and you want to switch to Git, because obviously you're not happy with the centralized version control system, and you want a more distributed version control system.

So you want to migrate from SVN to Git, but you are worried that you might have to transfer all the files, the huge number of files that you have in your SVN repository, into a Git repository.

Well, if you are afraid of doing that, let me tell you, you don't have to be anymore, because Git is compatible with SVN repositories as well.

So you just have to download and install Git in your system, and you can directly access the SVN repository over a network, which is the central repository.

So the local repository that you'll have is going to be a Git repository, and if you don't want to change your central repository, then you can do that as well.

You can use git svn, and you can directly access all the files in your project that reside in an SVN repository.

So you don't have to change that, and it is compatible with existing systems and protocols, protocols like SSH and HTTP.

So obviously Git uses SSH to connect to the central repository as well.

So it is very compatible with all the existing things, so when you are migrating to Git, when you are starting to use Git, you don't have to actually change a lot of things. So, has everyone understood these two features so far? Okay, the next feature of Git is that it supports non-linear development of software.

Now, when you're working with Git, Git actually records the current state of your project by creating a tree graph from the index.

And as you know, a tree is a non-linear data structure, and it is usually in the form of a directed acyclic graph, which is popularly known as a DAG.

So this is how Git actually facilitates non-linear development of software, and it also includes techniques that let you navigate and visualize all the work that you are currently doing. And when I'm talking about non-linearity, how does Git actually facilitate non-linear development? It is by branching. Branching actually allows you to have non-linear software development.

And this is the Git feature that actually makes Git stand apart from nearly every other version control management tool, because of this branching model.

So Git allows, and actually encourages, you to have multiple local branches, and all of the branches are actually independent of each other, and the creation and merging and deletion of all these branches actually takes only a few seconds. And there is a thing called the master branch,

meaning the main branch, which runs from the start of your project to the end of your project, and it will always contain the production-quality code.
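As a quick sketch of this branching model (the repository, branch name, and file contents are made up), creating an independent branch, working on it, and merging it back looks like this:

```shell
# A quick sketch of branching and merging (names are made up):
cd "$(mktemp -d)"
git init -q
git config user.email "dev@example.com"
git config user.name "Dev"

echo "v1" > app.txt
git add . && git commit -qm "initial commit on the main branch"

git checkout -qb feature            # creating an independent local branch takes a moment
echo "experiment" >> app.txt
git commit -qam "work done on the feature branch"

git checkout -q -                   # switch back to the main branch
git merge -q feature                # merging is just as quick
git log --oneline                   # both commits are now on the main branch
```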

It will always contain the entire project. After that, Git is very lightweight. Now, you might be thinking that since we're using local repositories on our local machine, and we're fetching all the files that are in the central repository,

and if you think that way, you'd realize that there may be lots of people pushing their code into the central repository and updating my local repository with all those files,

so the data might be very huge. But actually, Git uses a lossless compression technique, and it compresses the data on the client side.

So even though it might look like you've got a lot of files, when it actually comes to storing the data in your local repository,

it is all compressed, and it doesn't take up a lot of space. Only when you're fetching your data from the local repository into your workspace

does it convert it, and then you can work on it.

And whenever you push it again, it compresses it again and stores it in very minimal space on your disk. After that, it provides you with a lot of speed. Now, since you have a local repository, and you don't have to always travel over a network to fetch files, it takes almost no time to get files into your workspace from your local repository; it is actually about three times faster than fetching data from a remote repository, because there you obviously have to travel over a network to get the data or the files that you want. And Mozilla has actually performed some performance tests, and it found that Git is actually one order of magnitude faster than other version control tools, which is equal to ten times faster than other version control tools.

And the reason for that is that Git is actually written in C, and C is not like other high-level languages;

it is very close to machine language.

So it reduces all the runtime overheads and makes all the processing very fast.

So Git is very small, and Git is very fast.

And the next feature is that it is open source.

Well, you know that Git was actually created by Linus Torvalds, and he's the famous man who created the Linux kernel, and he actually used Git in the development of the Linux kernel. Now, they were using a version control system called BitKeeper first, but it was not open source;

the owner of BitKeeper actually made it a paid version, and this actually got Linus Torvalds mad.

So what he did is that he created his own version control tool, and he came up with Git, and he made it open source for everyone, so the source code is available, and you can modify it on your own, and you can get it for free.

So that is one more good thing about Git. And after that, it is very reliable.

Like I've been telling you since the start, you'll always have a backup of all the files in your local repository.

So if your central server crashes, you don't have to worry; your files are all saved in your local repository, and even if they're not in your local repository, they might be in some other developer's local repository, and you can ask him whenever you need that data. And after your central server is restored from the crash, anyone can directly push all the data back into the central repository, and from there everyone can always have a backup.

So the next thing is that git is actually very secure. Now, git uses SHA-1 to name and identify objects

So whenever you make a change, it actually creates a commit object, and after you have made changes and committed those changes, it is actually very hard to go back and change them without other people knowing, because whenever you make a commit, SHA-1 actually converts it. So what is SHA-1?

Well, it is a kind of cryptographic algorithm

It is a message digest algorithm that actually converts your commit object into a 40-digit hexadecimal code. Now, message digests use techniques from algorithms like MD4 and MD5, and SHA-1 is actually very secure

It is considered to be very secure because even the National Security Agency of the United States of America uses SHA

So if they're using it, you might know that it is very secure as well

And if you want to know what MD5 and message digests are, I'm not going to take you through the whole cryptographic algorithm of how they make that cipher; you can Google it and learn what SHA is. But the main concept of it is that after you have made changes

you cannot deny that you have made changes, because it will store them and everyone can see them; it will create a commit hash for you
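As a quick illustration of that idea (a sketch using the standard `sha1sum` tool rather than git itself), you can see that SHA-1 always produces a fixed-size, 40-character hexadecimal digest, which is the same form git's commit hashes take:

```shell
# SHA-1 turns any input into a fixed 160-bit digest, printed as 40 hex characters.
hash=$(printf 'hello' | sha1sum | cut -d' ' -f1)
echo "$hash"
echo "${#hash}"   # always 40, no matter how large the input is
```

Because the digest depends on every byte of the input, any tampering with a committed object changes its hash, which is why history can't quietly be rewritten.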

So everyone will see it, and this commit hash is also useful when you want to revert back to previous versions: you want to know which commit exactly caused what problem, and if you want to remove that commit, or remove that version, you can do that, because SHA will give you the hash log of every commit. So we move on and see the next feature, which is that it is economical. Now, git is actually released under the General Public License, and it means that it is free

You don't have to pay any money to download git onto your system

You can have git without burning a hole in your pocket

And all the heavy lifting is done on the client side, because everything you do, you do in your own workspace; you push it into the local repository first, and after that it's pushed to the central server

So it means that people are only pushing into the central server when they're sure about their work, and they're not experimenting on the central repository

So your central repository can be quite simple

You don't have to worry about having very complex and very powerful hardware, and a lot of money can be saved on that as well

So git is free, git is small, and git provides you with all the cool features that you would actually want

So these are all the git features

So we'll go ahead to the next topic. First we'll see what a repository is. Now, as GitHub says, it is a directory or storage space where all your projects live

It can be local, a folder on your computer like your local repository, or it can be a storage space on GitHub or another online host

That means your central repository, and you can keep your code files, text files, image files

you name it; you can keep everything that is related to your project inside a repository. And like I have been saying since the start of this tutorial, we have got two kinds of repositories

We've got the central repository and we've got the local repository, and now let us take a look at what these repositories actually are

So on my left hand side

you can see all about the central repository, and on the right hand side

this is all about my local repository, and the diagram in the middle actually shows you the entire layout. So the local repository will be inside my local machine, and my central repository for now is going to be on GitHub

So my central repository is typically located on a remote server, and like I just told you, here it is located on GitHub; my local repository is going to be on my local machine, where it resides as a .git folder inside your project's root

The .git folder is going to be inside your project's root, and it will contain all the templates, all the objects and every other configuration file when you create your local repository. And since you're pushing all the code, your central repository will also have the same .git folder inside it. The sole purpose of having a central repository is so that all the actors, all the developers, can actually share and exchange data, because someone might be working on a different problem and someone might be needing help with that. So what he can do is push all the code, all the problems that he has solved or things he has worked on, to the central repository; everyone else can see it, and everyone else can pull his code and use it for themselves as well

So this is just meant for sharing data

Whereas the local repository

only you can access, and it is only meant for your own work, so you can work in your local repository

You can work in isolation and no one will interfere, and after you're done, after you're sure that your code is working and you want to show it to everyone, just transfer it, or push it, into the central repository

Okay, so now we'll be seeing the git operations and commands

So this is how we'll be using it

There are various operations and commands that will help us do all the things that we were just talking about

We were talking about pushing changes

So these are all git operations

So we'll be performing all these operations: we'll be creating repositories with this command, we'll be making changes in the files that are in our repositories with these commands, we'll also be doing the parallel, nonlinear development that I was just talking about, and we'll also be syncing our repositories so that our central repository and local repository are connected

So I'll show you how to dothat one by one

So the first thing that we need to do is create repositories: we need a central repository and we need a local repository. Now we'll host our central repository on GitHub

So for that you need an account on GitHub

And create a repository there, and for your local repository you have to install git on your system

And if you are working on a completely new project and you want to start something fresh and new, you can just use git init to create your repository; or if you want to join an ongoing project, and you're new to the project and have just joined, what you can do is clone the central repository using this command, git clone
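In command form, those two starting points look like this (a sketch; the clone URL is a placeholder for a real repository address):

```shell
# Work in a scratch directory so nothing existing is touched.
cd "$(mktemp -d)"

# Starting a brand-new project: create a folder and turn it into a repository.
mkdir myproject && cd myproject
git init          # creates the hidden .git folder -- your local repository

# Joining an ongoing project instead: copy the central repository.
# (placeholder URL -- substitute your own repository's address)
# git clone https://github.com/<username>/<repository>.git
```

Either way you end up with a working directory that has a `.git` folder inside it, which is what makes it a git repository.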

So let us do that

So let's first create a GitHub account and create repositories on GitHub

So first you need to go to github.com

And if you don't have an account, you can sign up for GitHub; here you just have to pick a username that has not already been taken, provide your email address and a password, and then just click this green button here, and your account will be created

It's very easy, you don't have to do much; after that you just have to verify your email and everything, and then you're done with all that sort of thing

You can just go ahead and sign in; I already have an account

So I'm just going to sign in here

So after you're signed in, you'll find this page here

So you'll get two buttons, where you can read the guide on how to use GitHub, or you can just start a project right away

Now, I'll be telling you all about GitHub, so you don't have to click this button right now

So you can just go ahead and start a project

So now GitHub says that for every project you need to maintain a unique repository; that is because it's very healthy and keeps things very clean, because if you are storing just the files related to one project in a repository, you won't get confused later

So when you're creating a new repository, you have to provide a repository name; now, I'm just going to name it git-github

And you can provide a description of this repository

And this is optional

If you don't want to, you can leave it blank, and you can choose whether you want it public or private

Now, if you want it to be private, you have to pay some amount

So, like, this will cost you $7 a month

And so what is the benefit of having a private repository? It is that only you can see it; if you don't want to share your code with anyone and you don't want anyone to see it

you can do that in GitHub as well

But for now, I'll just leave it public

I just want it free, and let everyone see the work that I have done

So we'll just leave it public for now, and after that you can initialize this repository with a README

So the README file will contain the description of your files

This is the first file that is going to be inside a repository when you create it, and it's a good habit to initialize your repository with the README, so I'll just click this option

This is the option to add a .gitignore

There might be some files that you don't want included when you're performing operations like push or pull; you don't want those files to get pushed or pulled, like some kind of log files or anything, so you can add those files to the .gitignore here
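The same thing works from the command line too; here is a small self-contained sketch (the file names are made up for illustration):

```shell
# Throwaway repository just for this demo.
cd "$(mktemp -d)" && git init -q

# .gitignore lists patterns git should never track, e.g. log files:
printf '*.log\n' > .gitignore
touch app.log notes.txt

git status --short        # app.log does not appear; it is ignored
git check-ignore app.log  # prints the path, confirming the rule matches it
```

Any file matching a `.gitignore` pattern is simply invisible to `git add`, `git status`, push and pull.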

Right now I haven't got any files like that; this is just the start of our project

So I will just skip this .gitignore for now

And then you can actually add a license as well

So you can just go through what these licenses actually are

But for now I'll just leave it as none

And after that, just click on this green button here to create the repository

And there it is. You can see this is the initial commit; you have initialized your repository with the README, and this is your README file

Now, if you want to make changes to the README file, just click on it, then click on the edit pencil icon that is here, and you can make changes to the README file if you want to write something

Let's just write a description

So this is for our tutorial purpose, and that's it

Just keeping it simple

And after you've made the changes

the next thing that you have to do is commit the changes, so you can just go down and click on this green Commit changes button here

And it's done

So you have updated README.md, and this is your commit hash, so you can see it here

So if you go back to your repository, you can see that something has been updated, and it will show you when your last commit was; it will even show you the time. And for now you're on the branch master, and this will actually show you all the logs

So since only I'm contributing here

there is only one contributor, and I've just made two commits

The first one was when I initialized it, and the second right now when I modified it; and right now I have not created any branches

So there is only one branch

So now my central repository has been created

So the next thing that I need to do is create a local repository in my local machine

Now, I have already installed git on my system

I am using a Windows system

So I have installed Git for Windows

So if you want some help with the installation, I have already written a blog on that

I'll leave the link of the blog in the description below

You can refer to that blog and install git on your system

Now, I've already done that

So let's say that I want my project to be in the C drive

So let's say I'm just creating a folder here for my project

So I'll just name it

edureka project, and let's say that this is where I want my local repository to be

So the first thing that I'll do is right click, and I'll click this option here, Git Bash Here

And this will actually open up a very colorful terminal for you to use; this is called the Git Bash emulator

So this is where you'll be typing all your commands, and you'll be doing all your work, in the Git Bash here

So in order to create your local repository, the first thing that you'll do is type in this command, git init, and press enter

So now you can see that it has initialized an empty git repository on this path

So let's see; you can see that a .git folder has been created here, and if you look inside it, you can see that it contains all the configurations and the object details and everything

So your repository is initialized

This is going to be your local repository

So after we have created our repositories, it is very important to link them, because how would you know which repository to push into, and how will you pull the changes or files from a remote repository, if they're not connected properly?

So in order to connect them, the first thing that we need to do is add an origin; we're going to call our remote repository origin, and we'll be using the command git remote add origin, so that we can pull files from our GitHub, or central, repository

And in order to fetch files

we can use git pull, and if you want to transfer your files, or push files, into GitHub, we'll be using git push
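Here is a minimal, self-contained sketch of those three commands. A local bare repository stands in for the central one on GitHub, since a filesystem path works just like an HTTPS URL here:

```shell
# A bare repository plays the role of the central repository on GitHub.
central=$(mktemp -d)/central.git
git init -q --bare "$central"

# The local repository, linked to the central one under the name "origin".
cd "$(mktemp -d)"
git init -q
git remote add origin "$central"
git remote -v                       # shows origin's fetch and push URLs

# Once origin has commits, these two commands keep the repositories in sync:
# git pull origin master            # fetch files from the central repository
# git push origin master            # publish local commits to it
```

With a real GitHub repository you would pass its HTTPS (or SSH) URL to `git remote add origin` instead of a local path.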

So let me just show you how to do that

Sowe are back in the local repository

And as you can see, I have not got any files here yet

And if you go to my central repository, you can see that I've got a readme file

Sothe first thing that I need to do is to add this remote repository as my origin

So forthat I'll clear my screen first

So for that you need to use this command

git remote add origin

and the link of your central repository; let me just show you where you can find this link

So when you go back into your repository, you'll find this green button here, which is Clone or download; just click on it

And this is the HTTP URL that you want

Sojust copy it on your clipboard

Go back to your Git Bash, paste it and press enter; your origin has been added successfully, because it's not showing any errors

So now what we'll do is perform a git pull

That means we'll fetch all the files from the central repository into my local repository

So just type in the command git pull

origin master. And you can see that it has done some fetching from the master branch into the master branch; let us see whether all the files have been fetched or not

Let us go back to our local repository, and there is the README file that was in my central repository; now it is in my local repository as well

So this is how you actually update your local repository from the central repository: you perform a git pull, and it will fetch all the files from the central repository onto your local machine

So let us move forward andmove ahead to the next operation

Now, I've told you that in order to sync repositories you also need to use git push, but since we have not done anything in our local repository yet, I'll perform the git push later on, after I show you all the operations, and we'll be doing a lot of things

So at the end I'll be performing the git push and pushing all the changes into my central repository

And actually that is how you should do it; it's a good habit and a good practice, when you're working with GitHub and git, that when you start working

the first thing you do is a git pull, to fetch all the files from your central repository, so that you're updated with all the changes that have recently been made by everyone else; and after you're done working, after you're sure that your code is running, only then make the git push so that everyone can see it. You should not make very frequent changes to the central repository, because that might interrupt the work of your other collaborators or contributors

So let us moveahead and see how we can make changes

So now, git actually has a concept: it has an intermediate layer that resides between your workspace and your local repository

Now, when you want to commit changes or make changes in your local repository, you have to add those files to the index first

So this is the layer that is between your workspace and local repository

Now, if your files are not in the index, you cannot make a commit; in other words, you cannot make changes to your local repository

So for that you have to use the command git add, and you might get confused about which files are in the index and which are not

So if you want to see that, you can use the command git status, and after you have added the changes to the index, you can use the command git commit to make the changes in the local repository
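Put together, the workspace → index → local repository flow looks like this (a runnable sketch in a throwaway repository, using the same demo file name as this walkthrough):

```shell
# Fresh throwaway repository with a user identity configured for committing.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com"
git config user.name  "Demo User"

echo "first file" > edu1.txt
git status --short                    # "?? edu1.txt": untracked, not in the index
git add edu1.txt                      # move the file into the index
git status --short                    # "A  edu1.txt": staged, ready to commit
git commit -m "adding first commit"   # record the change in the local repository
```

Nothing reaches the local repository until that final `git commit`; the index is the holding area in between.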

Now, let me tell you what exactly a git commit is; everyone talks about git commit

and committing changes when you're making changes

So let us just learn what a git commit is

So let's say that you have not made any changes yet, or this is your initial project

So what a commit is, is a kind of object which is actually a version of your project

So let's say that you have made some changes and you have committed those changes; what your version control system will do is create another commit object, and this is going to be a different version with the changes

So your commit objects are actually going to contain snapshots of the project as it changed

So this is what a commit is

So I'll just show you I'll just go ahead and show you how to commitchanges in your local repository

So we're back into our local repository

And so let'sjust create some files here

So now, if you're developing a project, you might be contributing only your source code files to the central repository

So I'm not going to tell you all about coding

So we're just going to create some text files and write something in them, which is pretty much the same as if you were working on code and storing your source code in your repositories

So I'll just go ahead and create a simple text file

Just name it edu1

Just write something; I'll just write first file

Save this file and close it

So now remember that even though I have created it inside this repository folder, this is actually my workspace, and it is not in my local repository yet, because I have not committed it

So what I'm going to do is that I'm going to see what all files are in my index

Butbefore that I'll clear my screen because I don't like junk on my screen


So the first thing that we're going to see is which files are added to my index, and for that, as I just told you, we're going to use the command git status

So you can see that it is listing edu1.txt, which we have just written

It is calling it an untracked file; now, untracked files are those which are not added to the index yet

So this is newly created

I have not added it explicitly to the index

So if I want to commit changes in edu1.txt, I will have to add it to the index

So for that I'll just use the command git add and the name of the file, which is edu1.txt

And it has been added

So now let us check the status again

So for that we'll use git status

And you can see that under changes to be committed is edu1.txt, because it's in the index, and now you can commit changes to your local repository

So in order to commit, the command that you should be using is git commit

-m, because whenever you are committing you always have to give a commit message, so that everyone can see who made the commit and what exactly changed; this commit message is for your purposes, so that you can see what exactly was changed

But even if you don't write it, the version control system is still going to record it

And if you have configured your git, it is always going to show the user who committed this change

So I was just talking about writing a commit message

So I'm just going to write something like adding first commit, and press enter; you can see one file changed, something has been inserted

So the changes are finally committed in my local repository

And if you want to see how git actually stores all these commits, I'll show you after I show you how to commit multiple files together

So let's just go back into our local repo folder and we'll just create some more files, more text files

I'm just going to name it

edu2. We'll create another one

Just name it edu3

Let's just write something over here

We'll just say second file


So let's go back to our Git Bash terminal, and now let us see the git status

So now you can see that it is showing that edu2 and edu3 are not in my index, and if you remember, edu1 was already in the index. Actually, let me just go back and make some modifications in edu1 as well

So I'm going to write

modified one. So, let's see git status again

And you can see that it is showing that edu1 is modified, and there are untracked files, edu2 and edu3

Because I haven't added them to my index yet

So now, Sebastian and Jamie, you have been asking me how to add multiple files together

So now I'm going to add all these files at once; for that I'm just going to use git add -A, with a capital A. Just press enter, and now see the git status

And you see that all the files have been added to the index at once

And it's similar with commit as well

So now that you have added all the files to the index

I can also commit them all at once, and how do you do that?

Let me just show you: you just have to write git commit and -a, a small a. So if you want to commit all, you have to use a small -a in the case of git commit, whereas in the case of git add, if you want to add all the files, you have to use a capital -A

So just remember that difference, and add a message
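The difference between the two flags can be seen in a small self-contained sketch:

```shell
# Throwaway repository with an identity configured for committing.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo User"

# git add -A (capital A) stages everything at once, including brand-new files.
echo first  > edu1.txt
echo second > edu2.txt
echo second > edu3.txt
git add -A
git commit -m "adding three files together"

# git commit -a (small a) stages AND commits in one step,
# but only for files git already tracks.
echo "modified one" >> edu1.txt
git commit -a -m "modified a tracked file"
```

So `-A` is about getting new files into the index, while `-a` is a shortcut that re-stages already-tracked files at commit time.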

adding three files together. So you can see three files have been changed, and now let me show you how git actually stores all these commits

So you can perform an operation called git log

And you can see, this is the 40-digit hexadecimal code that I was talking about; this is the SHA-1 hash, and you can see the date, and you have got the commit message that we just provided, where I wrote adding three files together

It shows the date and the exact time and the author, and this is me, because I've already configured it with my name

So this is how you can see commits, and this is actually how a version control system like git stores all your commits
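You can check for yourself that each git log entry carries a full 40-character SHA-1 hash (another throwaway-repository sketch):

```shell
# Minimal repository with one commit to inspect.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo User"
echo hi > edu1.txt && git add edu1.txt && git commit -q -m "adding first commit"

git log                      # hash, author, date and message of every commit
hash=$(git rev-parse HEAD)   # the full SHA-1 of the latest commit
echo "${#hash}"              # 40 hexadecimal characters
```

That hash is what you pass to commands like `git revert` or `git checkout` when you want to go back to a particular version.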

So let us go back and see the next operation, which is how to do parallel, or non-linear, development

And the first operation is branching. Now, we've been talking about branching a lot, so let me just tell you what exactly branching is and what exactly you can do with it

Well, you can think of a branch as a pointer to a commit

Let's say that you've made changes in your main branch

Now remember that main branch that I told you about?

It's called the master branch, and the master branch will contain all the code

So let's say that you're working on the master branch, you've just made a change, and you've decided to add some new feature

So you want to work on the new feature individually, and you don't want to interfere with the master branch

So if you want to separate that, you can actually create a branch from this commit, and let me show you how to actually create branches

Now let me tell you that there are two kinds of branches: local branches and remote tracking branches

Your remote tracking branches are the branches that connect the branches in your local repository to your central repository, and local branches are something that you only create in your workspace

They are only going to work with the files in your local repository

So I'll show you how to create branches, and then everything will be clear to you

So let us go back to ourgit Bash

Clear the screen

And right now we are in the master branch; this indicates which branch you are on right now

So we're in the master branch right now, and we're going to create a different branch

So for that you just have to type the command git branch and write a branch name

So let us just call it first_branch

And enter. So now you have created a branch, and this first branch will contain all the files that were in master, because it originated from the master branch

So now it shows that you are still on the master branch, and if you want to switch to the new branch that you just created, you have to use the command git checkout. Moving from one branch to another is called checking out in git. So we're going to use git checkout and the name of the branch

Switched to first_branch: now you can see that we are on the first branch, and we can start doing all our work there
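The create-then-switch sequence can be sketched end to end like this (branch names cannot contain spaces, so the demo branch is written first_branch here):

```shell
# Throwaway repository with one commit for the branch to point at.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo User"
echo readme > README.md && git add -A && git commit -q -m "initial commit"

git branch first_branch      # create the branch: a new pointer to this commit
git branch                   # lists branches; the * marks the one checked out
git checkout first_branch    # "check out", i.e. switch onto, the new branch
git branch                   # the * has moved to first_branch
```

Creating the branch does not move you onto it; only the checkout changes which branch you are working on.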

So let us createsome more files in the first Branch

So let's go back, and this will actually show me the workspace of my first branch right now

So we'll just create another text document, name it edu4, and write something, say first branch. Save it, we'll go back, and now we've made some changes

So let us just commit these changes all at once

So let me just use git add

After that, what you have to do, if you remember, is perform a git commit. And I guess one file changed

So now remember that I have only made this edu4 change in my first branch, and it is not in my master branch, because we are on the first branch. If I list out all the files in the first branch, you can see that you've got edu1

edu2 and edu3, and the README, which were in the master branch; they are there because this branch originated from the master branch, and apart from that

it has a new file called edu4.txt

And now if you just move back to the master branch; let's say we're going back to the master branch

And if you just see the files in the master branch, you'll find that there is no edu4, because I've only made the changes in my first branch

So what we have done now is create branches, and we have also understood the purpose of creating branches, so we're moving on to the next topic

The next thing we'll see is merging. So now, if you're creating branches and you are developing a new feature and you want to add that new feature, you have to do an operation called merging. Merging means combining the work of different branches together, and it's very important that after you have branched off from the master branch, you always combine it back in at the end; after you're done working with the branch, always remember to merge it back in. So now we have created branches

Let us see: we have made changes in our branch, like adding edu4, and we want to combine that back into our master branch, because like I told you, your master branch will always contain your production-quality

code. So let us now actually start merging those files, because I've already created the branches

It's time that we merge them

So we are back in my terminal

And what we need to do is merge those changes; if you remember, we've got a different file in my first branch, which is edu4, and it's not in the master branch yet

So what I want to do is merge that branch into my master branch. For that I'll use a command called git merge and the name of my branch, and there is a very important thing to remember when you're merging: you want to merge the work of your first branch into master

So you want master to be the destination

So whenever you're merging, you have to remember that you should always be checked out on the destination branch. I'm already checked out on the master branch, so I don't have to switch

So I'll just use the command git merge and the name of the branch: you have to provide the name of the branch whose work you want merged into the current branch that you have checked out
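A compact sketch of the whole merge flow (again writing the branch name as first_branch; `git checkout -` jumps back to the previously checked-out branch, i.e. the destination in this demo):

```shell
# Throwaway repository with a base commit on the default branch.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo User"
echo base > edu1.txt && git add -A && git commit -q -m "base commit"

git checkout -q -b first_branch         # branch off and work there
echo "first branch" > edu4.txt
git add -A && git commit -q -m "add edu4"

git checkout -q -        # back to the destination branch (master in the video)
git merge first_branch   # pull first_branch's work into the current branch
ls                       # edu4.txt is now present on the destination branch too
```

The key point the transcript makes is encoded in that `checkout` before the `merge`: you merge *into* whatever branch you currently have checked out.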

So for now, I've just got one branch, which is called the firstbranch

and and so you can see that one file chain

Something has been added

We are inthe master bounce right now

So now let us list out all the files in the master branchand there you see now you have edu for DOT txt, which was not there before

I'm mergedit

So this is what merging does. Now you have to remember that your first branch is still separate

Now, if you want to go back into your first branch, modify some things again in the first branch, and keep them there, you can do that

It will not actually affect the master branch until you merge it

So let me just show you an example

So let's just go back to my first branch

So now let us make changes in edu4

I'll just write modified in first branch

We'll go back and we'll just commit all these changes, and I'll just use git commit -a

So now remember that git commit all is also used for another purpose

It doesn't only commit all the uncommitted files at once; if your files are in the index and you have just modified them, it also does the job of adding them to the index again after the modification, and then committing. But it won't work

if you have never added that file to the index. Now, edu4 was already in the index, and after modifying it I have not explicitly added it to the index again

And if I'm using git commit all, it will implicitly add it to the index, because it was already a tracked file, and then it will also commit the changes in my local repository

So you see, I didn't use the command git add

I just did it with git commit, because it was already a tracked file

So one file has been changed

So now if you just cat it, you can see that it's different

It shows the modification that we have done, which is modified in first branch. Now, let's just go back to my master branch

Now remember that I have not merged it yet, and my master branch also contains a copy of edu4; let's see what this copy actually contains

See, you see that the modification has not taken effect in the master branch, because I have only made the modification in the first branch

So the copy that is in the master branch is not the modified copy, because I have not merged it yet

So it's very important to remember that if you actually want all the changes that you have made in the first branch, all the things that you have developed in the new branch that you created, make sure that you merge it in; don't forget to merge, or else it will not show any of your modifications

So I hope you have understood why merging is important and how to actually merge different branches together

So we'll just move on to the next topic, which is rebasing. Now, rebasing is also another kind of merging

The first thing that you need to understand about rebase is that it actually solves the same problem as git merge; both of these commands are designed to integrate changes from one branch into another

It's just that they do the same task in a different way

Now, what rebasing means, if you see the workflow diagram here, is that you've got your master branch and you've got a new branch; when you're rebasing, instead of creating a commit which will have two parent commits

what rebasing does is place the entire commit history of your branch onto the tip of master

Now you would ask me

why should we do that, what is the use of it? Well, the major benefit of using a rebase is that you get a much cleaner project history
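Here is the same kind of throwaway-repository sketch for rebase; with the original branch checked out, `git rebase first_branch` replays its commits onto the tip of first_branch, leaving one straight line of history instead of a two-parent merge commit:

```shell
# Base commit on the default branch, extra work on first_branch.
cd "$(mktemp -d)" && git init -q
git config user.email "demo@example.com" && git config user.name "Demo User"
echo base > edu1.txt && git add -A && git commit -q -m "base commit"

git checkout -q -b first_branch
echo "welcome to edureka one" > edu5.txt
git add -A && git commit -q -m "add edu5"

git checkout -q -            # back on the original branch (master in the video)
git rebase first_branch      # its tip moves linearly onto the branch's work
git log --oneline            # one straight history -- no merge commit
```

The end state of the files is the same as a merge would give; what differs is the shape of the history.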

So I hope you've understood the concept ofrebase

So let me just show you how to actually do rebasing


So what we're going to do is some more work in our branch, and after that we'll rebase our branch onto master

So we'll just go back to our branch

We'll use git checkout

first_branch, and now we're going to create some more files here

Name them edu5 and, let's say, edu6

So we're going to write some random stuff

I'd say we're writing welcome to edureka

one, and we'll write the same thing again, say welcome to edureka two. So we have created these, and now we're going back to our Git Bash, and we're going to add all these new files; we need to add them, because we cannot do it with just git commit all, since these are untracked files

These are the files that I've just created right now

So I'm using git add -A. And now we're going to commit

And it has been committed

So now if you just see all the files, you can see edu one, two, three, four, five, six and the README, and then we go back to the master

And if you just list out all the files in master, it only has up to four; edu5 and edu6 are still in my first branch, and I have not merged them yet

And I'm not going to use git merge right now

I'm going to use rebase this time instead of git merge, and you'll see that this will actually do the same thing

For that you just have to use the command

So let us go back to our first branch

Okay, I made a typing error there

Okay, switched to the first branch, and now we're going to use the command git rebase master

Now it is showing that my current branch, first branch, is up to date, just because whatever is in the master branch is already there in my first branch, and there were no new files to be added

So that is the thing

But if you do it the reverse way, I'll show you what will happen.

So let's just go and check out master and do the rebasing: git rebase first_branch.

So now what happened is that all the work of first branch has been attached to the master branch, and it has been done linearly.

There was no new merge commit.

So now if you see all the files in the master branch, you'll find that you've got edu5 and edu6 as well, which were in the first branch.

So basically, rebasing has merged all the work of my first branch into the master, but the only difference is that it happened in a linear way: all the commits that we made in first branch actually got replayed onto the head of the master
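The walkthrough above can be condensed into a small, repeatable sketch. Everything here is illustrative (throwaway repository, made-up file names), and `git init -b` assumes Git 2.28 or newer:

```shell
# Set up a throwaway repository with a master branch.
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email demo@example.com && git config user.name Demo
echo base > edu1.txt && git add . && git commit -qm "base commit"

# Do some work on a separate branch.
git checkout -qb first_branch
echo five > edu5.txt && git add . && git commit -qm "add edu5"

# Replay the branch's commits onto the tip of master.
git rebase master

# Rebasing master onto the branch then fast-forwards master; history stays linear.
git checkout -q master
git rebase first_branch
git log --oneline   # two commits, no merge commit
```

Note that the first `git rebase master` reports the branch is already up to date, exactly as in the demo, because master gained no new commits after the branch was created.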

So this was all about nonlinear development

I have told you about branching, merging and rebasing; we've pulled changes and committed changes, but I realize that I haven't shown you how to push changes.

So since we're done working in our local repository and we have made all our final changes, we now want to contribute them to our central repository


So for that we're going to use git push, and I'm going to show you how to do a git push right now.

Before I go ahead and explain git push to you,

you have to know something about when you are actually setting up your repository.

If you remember, you set up your GitHub repository as a public repository; it means that you're giving read access to everyone else in the GitHub community

So everyone else can clone or download your repository files.

So when you're pushing changes to a repository, you have to know that you need certain access rights, because it is the central repository.

This is where you're storing your actual code.

So you don't want other people to interfere in it by pushing wrong code or something

So we're going to connect to my central repository via SSH in order to push changes into it. Now, at the beginning, when I was trying to make this connection with SSH, I was facing certain kinds of problems.

Let me go back to the repository and show you. When you click this button,

you see that this is your HTTPS URL, which we use in order to connect with your central repository. Now, if you want to use SSH, this is your SSH connection URL.

So in order to connect with SSH, what you need to do is generate a public SSH key and then simply add that key to your GitHub account
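For reference, these are the two connection URL shapes being discussed; the angle-bracket parts are placeholders for your own account and repository:

```
HTTPS: https://github.com/<username>/<repository>.git
SSH:   git@github.com:<username>/<repository>.git
```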

And after that you can start pushing changes

So first we'll do that: we will generate the SSH public key.

So for that, we'll use this command: ssh-keygen.

So in that file there is already an SSH key, so it asks whether I want to overwrite it.

So my SSH key has been generated and it has been saved in here.

So if I want to see it, I just use cat, and I can copy it.

So this is my public SSH key. If I want to add this SSH key, I'll go back into my GitHub account

And here I will go to Settings, and we'll click on this option, SSH and GPG keys. I've already got two SSH keys added, and I want to add my new one.

So I'm going to click this button, New SSH key; just make sure that you provide a name to it.

I'm just going to keep it in order: because I've named the other ones ssh1 and ssh2, I'm going to call this one ssh3.

So just paste your SSH key in here

Just copy this key

Paste it and click on this button, which is ADD SSH key

Okay, so now the first thing you need to do is clear the screen.

And now what you need to do is use this command: ssh -T, with the SSH URL that we use, which is git@github.com.

And enter. So my SSH authentication has been successfully done.

So I'll go back to my GitHub account.

And if I refresh this, you can see that the key is green.

It means that it has been properly authenticated, and now I'm ready to push changes onto the central repository
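The key-generation steps above, as commands. The key file name `id_demo` is made up for this sketch, and the final verification line is commented out because it needs network access and a configured GitHub account:

```shell
cd "$(mktemp -d)"

# Generate a key pair; -N "" sets an empty passphrase (for a demo only).
ssh-keygen -t ed25519 -f ./id_demo -N "" -q

# Print the public key; this is what you paste into
# GitHub > Settings > SSH and GPG keys > New SSH key.
cat ./id_demo.pub

# After adding the key on GitHub, verify the connection:
# ssh -T git@github.com
```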

So we'll just start doing it

So let me just tell you one more thing: if you are developing something in your local repository, and you have done it in a particular branch, let's say that you don't want to push these changes into the master branch of your central repo, or your GitHub repository.

So let's say that whatever work you have done

will stay in a separate branch in your GitHub repository, so that it does not interfere with the master branch, and everyone can identify that it is actually your branch, that you created it, and that this branch only contains your work

So for that let me just go to the GitHub repository and show you something

Let's go to the repositories

And this is the repository that I have just created today

So when you go into the repository, you can see that I have only got one branch here, which is the master branch.

And if I want to create branches, I can create them here, but I would advise you to create all branches from your command line, or from your Git Bash, in your central repository as well

So let us go back in our branch

So now what I want is for all the work of the first branch in my local repository to make a new branch in the central repository, and that branch in my central repository will contain all the files that are in the first branch of my local repository. So for that, I'll just perform

git push, then the name of my remote, which is origin, and then first branch

And you can see that it has pushed all the changes

So let us verify

Let us go back to our repository and let's refresh it.

So this is the master branch, and you can see that it has created another branch, called first branch, because I have pushed all the files from my first branch and created a new branch here in GitHub similar to the first branch in my local repository.

So now if we go to branches, you can see that there is not only a single master; we have also got another branch, which is called the first branch. Now, if you want to check out this branch, just click on it.

And you can see it has all the files, with all the commit logs, here in this branch.

So this is how you push changes, and if you want to push all the changes into master, you can do the same thing

Let us go back to our Branch master

And we're going to perform a git push here.

But what we're going to do this time is push all the files into the master branch of my central repository.

So for that, I'll just use git push

Okay, so the push operation is done

And if you go back here, and if you go back to master, you can see that all the files that were in the master branch of my local repo have been added into the master branch of my central repo also
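Putting the push steps together: a self-contained sketch in which a local bare repository stands in for the central GitHub repository (all names are illustrative; with GitHub, origin would be your HTTPS or SSH URL):

```shell
cd "$(mktemp -d)"
git init -q --bare central.git            # stand-in for the central repository
git init -q -b master work && cd work     # local repository (Git 2.28+ for -b)
git config user.email demo@example.com && git config user.name Demo
git remote add origin ../central.git

echo hello > readme.txt && git add . && git commit -qm "initial commit"
git push -q origin master                 # push master to the central repo

git checkout -qb first_branch
echo five > edu5.txt && git add . && git commit -qm "branch work"
git push -q origin first_branch           # creates first_branch centrally

git ls-remote --heads origin              # lists both branches
```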

So this is how you move changes from your local repository to your central repository.

So this is exactly what you do with Git. If I have to summarize everything I just showed you about git add, committing, pushing and pulling, this is exactly what is happening

So this is your local repository

This isyour working directory

So the staging area is our index, the intermediate layer between your workspace and your local repository.

So you have to add your files into the staging area, or the index, with git add, and commit those changes with git commit into your local repository. And if you want to push all of this to the remote repository, or the central repository where everyone can see it, you use git push. And similarly,

if you want to pull or fetch all those files from your GitHub repository, you can use git pull. And if you want to use branches,

if you want to move from one branch to another, you can use git checkout.

And if you want to combine the work of different branches together, you can use git merge.

So this is entirely what you do when you're performing all these kinds of operations
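As a runnable illustration of the checkout and merge steps in that summary (throwaway repository, illustrative names; `git init -b` assumes Git 2.28+):

```shell
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email demo@example.com && git config user.name Demo
echo one > a.txt && git add . && git commit -qm "work on master"

git checkout -qb first_branch             # create and switch to a branch
echo two > b.txt && git add . && git commit -qm "work on first_branch"

git checkout -q master                    # move back to master
git merge -q first_branch                 # combine the branch's work into master
ls                                        # both a.txt and b.txt are present
```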

So I hope it is clear to everyone. Now I'll just show you how you can check what has been changed, and how to revert modifications. So just clear the screen. Okay.

So let us go back to our terminal, just for experimentation, to show you how we can actually revert back to our previous changes.

Now, we might not want to change anything in the files that we created earlier.

So let's just go and create a new file, modify it two times, and revert back to the previous version, just for demonstration purposes

So I'm just going to create a new text file

Let's call it revert

And now let us just type something


Let's just keep it that simple

Just save it and go back

We'll add this file

then commit this; let's just call it revert one. Just remember that this is the first commit that I made, with the message revert one. Enter

So it has been changed

So now let's go back and modify this

So after I've committed this file, it means that it has stored a version with the text hello in my revert text file

So I'm just going to go back and change something in here

So let us just add "there"

Hello there

Save it

Let's go back to our bash


Let us commit this file again, because I've made some changes and I want a different version of the revert file.

So we'll just go ahead and commit again.

So I'll use git commit -a,

with the message revert two, and enter, and it's done

So now, if I want to revert back... okay, so now just see the file

You can see I've modified it

So now it has got hello there

Let's say that I want to go back to my previous version.

I just want to go back to when I had just hello

So for that, I'll just check my git log

I can check here: this is the commit log, or the commit hash,

from when I first committed revert; it means that this is version one of my revert

Now, what you need to do is copy this commit hash.

Now, you can just copy the first eight hexadecimal digits and that will be it.

So just copy it; I'll just clear the screen first.

So you just need to use this command: git checkout, then the hexadecimal digits that you just copied, and the name of your file, which is revert.txt

So you just have to use the command git checkout, with the first 8 digits of the commit hash that you just copied, and you have to name the file, which is revert.txt

So now when you view this file, you have gone back to the previous commit.

And now when you display this file, you can see that I've only got just hello.

It means that I have rolled back to the previous version, because I used the commit hash from when I initially committed the first change

So this is how you revert back to a previous version
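The rollback just demonstrated, as a self-contained sketch (throwaway repository; the file name and commit messages mirror the demo, and `git init -b` assumes Git 2.28+):

```shell
cd "$(mktemp -d)"
git init -q -b master repo && cd repo
git config user.email demo@example.com && git config user.name Demo

echo "hello" > revert.txt
git add . && git commit -qm "revert one"  # version one of the file

echo "hello there" > revert.txt
git commit -qam "revert two"              # version two of the file

# Grab the hash of the first commit (git log --reverse lists oldest first).
first=$(git log --reverse --format=%h | head -n 1)

git checkout "$first" -- revert.txt       # roll the file back to version one
cat revert.txt                            # prints: hello
```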

So this is what we have learned in today's tutorial.

We have understood

what Version Control is and why we need it, and we've also learned about the different version control tools.

And in that we have primarily focused on Git. We have learned all about Git and GitHub: how to create repositories and perform operations and commands in order to push, pull and move files from one repository to another. We've also studied the features of Git, and we've seen a case study about how Dominion Enterprises, one of the biggest publishing companies, which makes very popular websites that we have got right now, did it

We have seen how they have used GitHub as well

Hello everyone

This is your instructor from Edureka, and in today's session we'll focus on what is Jenkins.

So without any further ado, let us move forward and have a look at the agenda for today. First,

we'll see why we need continuous integration,

and what problems industries were facing before continuous integration was introduced. After that, we'll understand what exactly continuous integration is, and we'll see various types of continuous integration tools. Among those continuous integration tools, we'll focus on Jenkins, and we'll also look at the Jenkins distributed architecture. Finally, in our hands-on part, we'll prepare a build pipeline using Jenkins, and I'll also tell you how to add Jenkins slaves. Now, I'll move forward and we'll see why we need continuous integration

So this is the process before continuous integration. Over here, as you can see, there's a group of developers who are making changes to the source code that is present in the source code repository.

This repository can be a Git repository, Subversion repository, etc.

And then, once the entire source code of the application is written, it will be built by tools like Ant, Maven, etc.

And after that, the built application will be deployed onto the test server for testing. If there's any bug in the code, developers are notified with the help of the feedback loop, as you can see on the screen, and if there are no bugs, then the application is deployed onto the production server for release

I know you must be thinking: what is the problem with this process? This process looks fine.

As in, you first write the code, then you build it.

Then you test it, and finally you deploy. But let us look at the flaws that were there in this process, one by one

So this is the first problem, guys. As you can see, there is a developer who's waiting for a long time in order to get the test results, as first the entire source code of the application will be built, and only then will it be deployed onto the test server for testing.

It takes a lot of time, so developers have to wait for a long time in order to get the test results.

The second problem: the entire source code of the application is first built, and then it is tested.

So if there's any bug in the code, developers have to go through the entire source code of the application. As you can see, there is a frustrated developer, because he has written code for an application which was built successfully, but in testing there were certain bugs in it, so he has to check the entire source code of the application in order to remove that bug, which takes a lot of time. So basically, locating and fixing of bugs was very time-consuming

So I hope you are clear with the two problems that we have just discussed. Now we'll move forward and we'll see two more problems that were there before continuous integration.

So the third problem was that the software delivery process was slow. Developers were actually wasting a lot of time in locating and fixing of bugs instead of building new applications; as we just saw, locating and fixing of bugs was a very time-consuming task, due to which developers were not able to focus on building new applications.

You can relate that to the diagram which is present in front of your screen: just as people waste a lot of time watching TV and doing social media, similarly, developers were also wasting a lot of time in fixing bugs

All right

So let us have a look at the fourth problem, that is, continuous feedback. Continuous feedback related to things like build failures, test status, etc. was not present, due to which the developers were unaware of how their application was doing. Now, about the process that I showed before continuous integration:

There was a feedback loop present

So what I will do is go back to that particular diagram, and I'll try to explain from there.

So the feedback loop is here: when the entire source code of the application is built and tested, only then are the developers notified about the bugs in the code

All right, when we talk about continuous feedback: suppose this developer that I'm highlighting makes any commit to the source code that is present in the source code repository.

At that time, the code should be pulled and built, and the moment it is built, the developer should be notified about the build status. Then, once it is built successfully, it is deployed onto the test server for testing. At that time,

whatever the test report says, the developer should be notified about it

Similarly, if this developer makes any commit to the source code, at that time

the code should be pulled,

it should be built, and the build status should be notified to the developers. After that,

it should be deployed onto the test server for testing, and the test results should also be given to the developers

So I hope you are all clear on

the difference between continuous feedback and plain feedback: in continuous feedback, you're getting the feedback on the run

So we'll move forward and we'll see how exactly continuous integration addresses these problems.

Let us see how exactly continuous integration resolves the issues that we have discussed

So what happens here: there are multiple developers.

So if any one of them makes any commit to the source code that is present in the source code repository, the code will be pulled, built, tested and deployed

So what advantage do we get here?

So first of all, any commit that is made to the source code is built and tested, due to which, if there is any bug in the code, developers actually know where the bug is present, or which commit has caused that error, so they don't need to go through the entire source code of the application.

They just need to check that particular commit which introduced the bug

All right

So in that way, locating and fixing of bugs becomes very easy. Apart from that, the first problem that we saw was that developers have to wait for a long time in order to get the test results; here, every commit made to the source code is tested.

So they don't need to wait for a long time in order to get the test results

So when we talk about the third problem, that the software delivery process was slow: it is completely removed with this process. Developers are not wasting time on locating and fixing bugs, because that doesn't take a lot of time anymore, as we just discussed; instead of that,

they're focusing on building new applications.

Now, the fourth problem was that continuous feedback was not present.

But over here, as you can see, developers are getting the feedback on the run about the build status, test results, etc.; developers are continuously notified about how their application is doing

So I will move forward now and compare the two scenarios, that is, before continuous integration and after continuous integration. Now, over here, what you can see is that before continuous integration, as we just saw, first the entire source code of the application will be built, and only then will it be tested.

But when we talk about after continuous integration: every commit, whatever change you made in the source code, even minute changes,

once you commit it to the source code, at that time only, the code will be pulled.

It will be built, and then it will be tested. Earlier, developers had to wait for a long time in order to get the test results, as we just saw, because the entire source code would first be built and then deployed onto the test server.

But when we talk about continuous integration, the test result of every commit will be given to the developers. And when we talk about feedback: there was no feedback present earlier, but in continuous integration, feedback is present for every commit made to the source code

You will be provided with the relevant result

All right, so now let us move forward and see what exactly continuous integration is. Now, in the continuous integration process, developers are required to make frequent commits to the source code.

They have to frequently make changes in the source code, and whenever a change is made in the source code, it will be detected by the continuous integration server, and then that code will be built, or you can say it will be compiled

All right. Now,

depending on the continuous integration tool that you are using, or depending on the needs of your organization,

it will also be deployed onto the test server for testing, and once testing is done,

it will also be deployed onto the production server for release, and developers are continuously getting feedback about their application on the run

So I hope I'm clear with this particular process

So we'll see the importance of continuous integration with the help of a case study of Nokia.

So Nokia adopted a process called nightly build. Nightly build can be considered a predecessor to continuous integration

Let me tell you why

All right

So over here, as you can see, there are developers who are committing changes to the source code that is present in a shared repository.

All right, and then what happens in the night? There is a build server.

This build server will poll the shared repository for changes, and then it'll pull that code and prepare a build

All right

So in that way, whatever commits are made throughout the day are compiled in the night.

So obviously this process is better than writing the entire source code of the application and then building it. But again, if there is any bug in the code, developers have to check all the commits that have been made throughout the day, so it is not the ideal way of doing things, because you are again wasting a lot of time in locating and fixing of bugs

All right, so I want answers from you all, guys.

What can be the solution to this problem?

How can Nokia address this particular problem? Since we have seen what exactly continuous integration is and why we need it, now, without wasting any time,

I'll move forward and I'll show you how Nokia solved this problem

So Nokia adopted continuous integration as a solution, in which developers commit changes to the source code in a shared repository.

All right, and then what happens is that there is a continuous integration server. This continuous integration server polls the repository for changes; if it finds that there is any change made in the source code, it will pull the code and compile it.

So what is happening: the moment you commit a change to the source code, the continuous integration server will pull that and prepare a build.

So if there is any bug in the code, developers know which commit is causing that error

All right, so they can just go through that particular commit in order to fix the bug.

So in this way, locating and fixing of bugs was very easy. But we saw that in nightly builds, if there is any bug, they have to check all the commits that have been made throughout the day.

So with the help of continuous integration, they know which commit is causing that error.

So locating and fixing of bugs didn't take a lot of time

Okay, before I move forward, let me give you a quick recap of what we have discussed till now. First,

we saw why we need continuous integration,

and what problems industries were facing before continuous integration was introduced. After that,

we saw how continuous integration addresses those problems, and we understood what exactly continuous integration is.

And then, in order to understand the importance of continuous integration, we saw the case study of Nokia, in which they shifted from nightly build to continuous integration

So we'll move forward and we'll see the various continuous integration tools available in the market.

These are the four most widely used continuous integration tools.

First is Jenkins, on which we will focus in today's session; then Buildbot, Travis and Bamboo.

Right, let us move forward and see what exactly Jenkins is. So Jenkins is a continuous integration tool.

It is an open source tool, and it is written in Java. How does it achieve continuous integration?

It does that with the help of plugins

Jenkins has well over a thousand plugins.

And that is the major reason why we are focusing on Jenkins.

Let me tell you guys, it is the most widely accepted tool for continuous integration because of its flexibility and the number of plugins that it supports.

So as you can see from the diagram itself, it supports various development, deployment and testing technologies, for example Git, Maven, Selenium, Puppet, Ansible, Nagios

All right

So if you want to integrate a particular tool, you need to make sure that the plugin for that tool is installed in your Jenkins. For a better understanding of Jenkins,

Let me show you the Jenkins dashboard

I've installed Jenkins in my Ubuntu box.

So if you want to learn how to install Jenkins, you can refer to the Jenkins installation video.

So this is the Jenkins dashboard, guys. As you can see, there are currently no jobs, because of which this section is empty; otherwise it will give you the status of all your build jobs over here.

Now, when you click on New Item, you can actually start a new project from scratch

All right

Now, let us go back to our slides.

Let us move forward and see the various categories of plugins. As I told you earlier, Jenkins achieves continuous integration with the help of plugins.

All right, and Jenkins supports well over a thousand plugins, and that is the major reason why Jenkins is so popular nowadays

So the plugin categorization is there on your screen: there are certain plugins for testing, like JUnit, Selenium, etc. When we talk about reports, we have multiple plugins, for example HTML Publisher. For notification,

we also have many plugins, and I've written one of them, that is, the Jenkins build notification plugin. When we talk about deployment, we have plugins like the Deploy plugin; when we talk about compiling, we have plugins like Maven, etc.

Alright, so let us move forward and see how to actually install a plugin on the same Ubuntu box where my Jenkins is installed.

So over here, in order to install a plugin, what you need to do is click on the Manage

Jenkins option, and over here, as you can see, there's an option called Manage Plugins

Just click over there

As you can see, it has certain updates for the existing plugins, which I have already installed.

Right, then there's an option called Installed, where you'll get the list of plugins that are there in your system.

All right, and at the same time, there's an option called Available.

It will give you all the plugins that are available with Jenkins.

Alright, so now what I will do is go ahead and install a plugin called HTML Publisher

So it's very easy

What you need to do is just type the name of the plugin,

HTML Publisher plugin; just select it there and click Install without restart.

So it is now installing that plugin; we need to wait for some time.

So it has now been successfully installed. Now, let us go back to our Jenkins dashboard

So we have understood what exactly Jenkins is, and we have seen various Jenkins plugins as well.

So now is the time to understand Jenkins with an example; we'll see a general workflow of how Jenkins can be used

All right

So let us go back to our slides

So now, as I told you earlier as well, we'll see a Jenkins example, so let us move forward.

So what is happening: developers are committing changes to the source code, and that source code is present in a shared repository.

It can be a Git repository, Subversion repository, or any other repository

All right

Now, let us move forward and see what happens next. Now, over here, what is happening:

There's a Jenkins server

It is actually polling the source code repository at regular intervals to see if any developer has made any commit to the source code.

If there is a change in the source code, it will pull the code and prepare a build, and at the same time developers will be notified about the build results. Now, let us execute this practically

All right, so I will again go back to my Jenkins dashboard, which is there in my Ubuntu box.

What I'm going to do is create a new item, basically a new project. Now, over here,

I'll give a suitable name to my project; you can use any name that you want

I'll just write compile

And now I click on freestyle project.

The reason for doing that is that the freestyle project is the most configurable and flexible option

It is easier to set up as well

And at the same time, many of the options that we configure here are present in other build jobs as well. So we'll move forward with freestyle project, and I'll click on OK. Now, over here, what I'll do is go to the Source Code Management tab, and it will ask you what type of source code management you want.

I'll click on Git, and over here

you need to type your repository URL. In my case, it is https://github.com/<your username>/<the name of your repository>, and finally .git. All right, now in the Build section, you have multiple options

All right

So what I'll do is click on Invoke top-level Maven targets.

So now, over here, let me tell you guys: Maven has a build lifecycle, and that build lifecycle is made up of multiple build phases.

Typically, the sequence of build phases will be: first you validate the code, then you compile it

Then you test it

Then you perform unit tests using a suitable unit testing framework.

Then you package your code in a distributable format like a JAR, then you verify it, and you can actually install any package that you want with the help of the install build phase, and then you can deploy it in the production environment for release.

So I hope you have understood the Maven build lifecycle

So in the Goals tab: what I need to do is compile the code that is present in the GitHub account.

So for that, in the Goals tab, I need to write compile.

So this will trigger the compile build phase of Maven. Now, that's it, guys

That's it

Just click on Apply

and Save. Now, on the left hand side,

there's an option called Build Now. To trigger the build, just click over there, and you will be able to see the build starting. In order to see the console output,

you can click on that build and see the console output

So it has validated the GitHub account, and it is now starting to compile the code which is there in the GitHub account.

So we have successfully compiled the code that was present in the GitHub account

Now, let us go back to the Jenkins dashboard

Now in this Jenkins dashboard, you can see that my project is displayed over here.

And as you can see, the blue color of the ball indicates that it has been executed successfully

All right

Now, let us go back to the slides. Let us move forward and see what happens

once you have compiled your code.

Now, the code that you have compiled, you need to test

All right

So what Jenkins will do is deploy the code onto the test server for testing, and at the same time, developers will be notified about the test results as well.

So let us again execute this practically; I'll go back to my Ubuntu box again.

So in the GitHub repository, the test cases are already defined.

Alright, so we are going to analyze those test cases with the help of Maven.

So let me tell you how to do it. We'll again go and click on New Item, and over here we'll give a suitable name to the project

I'll just type test

I'll again use freestyle project, for the reason that I've told you earlier, click on OK, and go to the Source Code Management tab.

Now, before applying unit testing on the code that I've compiled,

I need to first review it with the help of the PMD plugin

I'll do that

So for that I will again click on New Item, and over here

I need to type the name of the project.

So I'll just type it as code_review.

Freestyle project, click OK.


Now, in the Source Code Management tab, I will again choose Git and give my repository URL: https://github.com/<username>/<name of the repository>.git. All right, now scroll down. Now, in the Build tab,

I'm going to click over there.

And again, I will click on Invoke top-level Maven targets. Now, in order to review the code,

I am going to use the metrics profile of Maven.

So how to do that?

Let me tell you: you need to type here -P metrics pmd:pmd. All right, and this will actually produce a PMD report that contains all the warnings and errors. Now, in the Post-build Actions tab, I click on Publish PMD analysis results.

That's all. Click on Apply and Save, and finally click on Build Now

And let us see the console output

So it has now pulled the code from the GitHub account and is performing the code review.

So it has successfully reviewed the code. Now,

let us go back to the project. Over here,

you can see an option called PMD Warnings; just click over there, and it will display all the warnings that are present in your code

So this is the PMD Alice's report over here

As you can see that there are total11 warnings and you can find the details here as well like package you have then you havethen you have categories then the types of warnings which are there like for example,empty cache blocks empty finally block

Now, you have one more tab called Warnings over there.

You can find where each warning is present: the filename and the package.

All right, then you can find all the details in the Details tab.

It will actually tell you where the warning is present in your code.

All right

Now, let us go back to the Jenkins dashboard, and now we'll perform unit tests on the code that we have compiled. For that, again

I'll click on New Item and I'll give a name to this project.

I will just type test.

And I click on Freestyle project.


Now in the Source Code Management tab, I'll click on Git. Now, over here

I'll type the repository URL:

https://github.com/username/repository-name.git

And in the Build option, I again click on Invoke top-level Maven targets. Now, over here, as I've told you earlier as well, the Maven build lifecycle has multiple build phases: first it will validate the code, then compile, then test, then package, then verify, then install the required packages,

And then finally it will deploy it
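The lifecycle just described can be exercised phase by phase from a terminal; running any phase also runs all the phases before it. A quick sketch:

```shell
mvn validate   # check the project is correct and all information is available
mvn compile    # compile the source code
mvn test       # run unit tests with the configured framework (e.g. JUnit)
mvn package    # bundle the compiled code, e.g. into a JAR or WAR
mvn verify     # run checks on the results of integration tests
mvn install    # install the package into the local repository
mvn deploy     # copy the final package to a remote repository
```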


So one of the phases is actually test, which performs unit testing using a suitable unit testing framework.

The test cases are already defined in my GitHub account.

So to analyze the test cases, in the Goals section I need to write test.

All right, and it will invoke the test phase of the Maven build lifecycle.

All right, so just click on Apply and Save, and finally click on Build Now. To see the console output, click here. Now, in the Source Code Management tab,

I'll select Git. All right, over here again

I need to type my repository URL.

That is https://github.com/username/repository-name.git. And now in the Build tab,

I'll select Invoke top-level Maven targets. And over here, as I have told you earlier as well, the Maven build lifecycle has multiple phases.

All right, and one of those phases is the unit test phase. So in order to invoke that unit test, what I need to do is, in the Goals tab, write test, and it will invoke the test build phase of the Maven build lifecycle.

All right.

So the moment I write test here and build it,

it will actually analyze the test cases that are present in the GitHub account.

So let us write test, click on Apply and Save, and finally click on Build Now.

And in order to see the console output, click here.

So it has pulled the code from the GitHub account, and now it's performing the unit tests.

So we have successfully performed testing on that code. Now I will go back to my Jenkins dashboard, and as you can see, all the three build jobs that we have executed are successful, which is indicated with the help of the blue colored ball.

All right

Now, let us go back to our slides

So we have successfully performed unit tests on the test cases that were there in the GitHub account. Now we'll move forward and see what happens after that.

Now, finally, you can deploy that built application onto the production environment for release. But when you have one single Jenkins server, there are multiple disadvantages.

So let us discuss them one by one. We'll move forward and we'll see what are the disadvantages of using one single Jenkins server. Now,

what I'll do, I'll go back to my Jenkins dashboard and I'll show you how to create a build pipeline.

All right

So for that I'll move to my Ubuntu box

Once again. Now over here, you can see that there is an option of a plus.

OK, just click over there. Now over here, click on Build Pipeline View. Whatever name you want,

you can give. I'll just give it as Edureka

pipeline, and click on OK.

Now over here, what you can do, you can give some description of your build pipeline.

All right, and there are multiple options that you can just have a look at over here.

There's an option called Select Initial Job.

So I want compile to be my first job. And there are display options over here, like the number of displayed builds that you want.

I'll just keep it as 5. Then there are the row headers and column headers that you want, so you can just have a look at all these options and play around with them. Just for the introductory example, let us keep it this way. Now, finally, click on Apply and OK.

Currently you can see that there is only one job, that is compile.

So what I'll do, I'll add more jobs to this pipeline. For that,

I'll go back to my Jenkins dashboard, and over here

I'll add code_review as well.

So for that, I will go to Configure.

And in the Build Triggers tab, what I'll do is click on Build after other projects are built.

So whatever project you want to execute before code_review, just type that. So I want compile.

Yeah, click on compile, and over here

you can see that there are multiple options, like trigger only if the build is stable, trigger even if the build is unstable, and trigger even if the build fails. So I'll just click on trigger even if the build fails.

All right, finally click on Apply and Save.

Similarly, if I want to add my test job as well to the pipeline,

I can click on Configure, and again in the Build Triggers tab,

I'll click on Build after other projects are built.

So over here, type the project that you want to execute before this particular project. In our case,

it is code_review.

So let us click over there, choose trigger even if the build fails, then Apply and Save. Now let us go back to the dashboard and see what our pipeline looks like.

So this is our pipeline.

Okay, so when we click on Run, let us see what happens. First,

it will compile the code from the GitHub account.

That is, it will pull the code and it will compile it.

So now the compile is done.

All right, now it will review the code.

So the code review has started. In order to see the log,

you can click on Console.

It will give you the console output.

Now, once code review is done,

it will start testing.

So the code has been successfully reviewed; as you can see, the color has become green.

Now the testing has started. It will perform unit tests on the test cases that are there in the GitHub account. So we have successfully executed three build jobs: that is, compile the code, then review it, and then perform testing.

All right, and this is the build pipeline, guys.

So let us go back to the Jenkins dashboard.

And we'll go back to our slides now.

So now we have successfully performed unit tests on the test cases that are present in the GitHub account.

All right.

Now, let us move forward and see what else you can do with Jenkins.

Now, the application that we have tested can also be deployed onto the production server for release.

Alright, so now let us move forward and see what are the disadvantages of this one single Jenkins server.

So there are two major disadvantages of using one single Jenkins server. First is, you might require different environments for your build and test jobs.

All right.

So at that time, one single Jenkins server cannot serve the purpose. And the second major disadvantage: suppose

you have heavier projects to build on a regular basis.

So at that time, one single Jenkins server simply cannot handle the load.

Let us understand this with an example. Suppose

you need to run web tests using Internet Explorer.

So at that time you need a Windows machine, but your other build jobs might require a Linux box.

So you can't use one single Jenkins server.

All right, so let us move forward and

see what is actually the solution to this problem. The solution to this problem is the Jenkins distributed architecture.

So the Jenkins distributed architecture consists of a Jenkins master and multiple Jenkins slaves.

So this Jenkins master is actually used for scheduling build jobs.

It also dispatches builds to the slaves for actual execution.

All right, it also monitors the slaves, possibly taking them online and offline as required, and it also records and presents the build results. You can directly execute a build job on the master instance as well.

Now, when we talk about Jenkins slaves, these slaves are nothing but Java executables that are present on remote machines.

All right, so these slaves basically hear the requests of the Jenkins master, or you can say they perform the jobs as told by the Jenkins master. They operate on a variety of operating systems.

So you can configure Jenkins to execute a particular type of build on a particular Jenkins slave, or on a particular type of Jenkins slave, or you can actually let Jenkins pick the next available slave.

All right
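Under the hood, when the master launches an agent over SSH, it copies an agent JAR into the remote root directory configured for the node and starts it with Java. A rough sketch of what ends up running on the slave (the path matches the demo that follows, but is otherwise just an example):

```shell
# On the slave machine: the master copies the agent JAR into the
# configured remote root directory, then launches it over SSH.
java -jar /home/edureka/slave.jar
```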

Now I'll go back again to my Ubuntu box, and I'll show you practically how to add Jenkins slaves. Now over here, as you can see, there is an option called Manage Jenkins. Just click over there, and when you scroll down you'll see an option called Manage Nodes. On the left-hand side,

there is an option called New Node.

Just click over there, click on Permanent Agent, and give a name to your slave.

I'll just give it as slave_1.

Click on OK. Over here,

you need to write the remote root directory.

So I'll keep it as /home/edureka.

Labels are not mandatory; still, if you want, you can use them. And for the launch method,

I want it to be Launch slave agents via SSH.

All right, over here

you need to give the IP address of your host.

So let me show you the IP address of my host. This is my Jenkins slave machine.

So, this is the machine that I'll be using as the Jenkins slave. In order to check the IP address,

I'll type ifconfig.

This is the IP address of that machine. Just copy it.

Now I'll go back to my Jenkins master

And in the Host tab, I'll just paste that IP address. And over here,

you can add the credentials. To do that,

just click on Add, and over here

you can give the username.

I'll give it as root, then the password.

That's all, just click on Add.

And over here, select it.

Finally, save it.

Now it is currently adding the slave. In order to see the logs,

you can click on that slave again.

Now, it has successfully added that particular slave.

Now what I'll do, I'll show you the logs. For that, click on the slave.

And on the left-hand side, you will notice an option called Log. Just click over there, and it will give you the output.

So as you can see, the agent has successfully connected and it is online right now.

Now what I'll do, I'll go to my Jenkins slave, and I'll show you in /home/edureka that it is added.

Let me first clear my terminal. Now what I'll do, I'll show you the contents of /home/edureka.

As you can see, we have successfully added slave.jar.

That means we have successfully added a Jenkins slave to our Jenkins master.


This is codebeast, and today's session will focus on what is Docker.

So without any further ado, let us move forward and have a look at the agenda for today. First,

we'll see why we need Docker. We'll focus on the various problems that industries were facing before Docker was introduced. After that, we'll understand what exactly Docker is, and for a better understanding of Docker, we'll also look at a Docker example. After that, we'll understand how industries are using Docker with the case study of Indiana University.

Our fifth topic will focus on various Docker components, like images, containers, etc., and our hands-on part will focus on installing WordPress and phpMyAdmin using Docker Compose.

So we'll move forward and we'll see why we need Docker.

So this is the most common problem that industries were facing: as you can see, there is a developer who has built an application that works fine in his own environment.

But when it reached production, there were certain issues with that application.

Why does that happen? That happens because of the difference in the computing environment between dev and prod. I'll move forward and we'll see the second problem. Before we proceed with the second problem,


it is very important for us to understand

what microservices are. Consider a very large application; that application is broken down into smaller services.

Each of those services can be termed as microservices. Or we can put it another way as well: microservices can be considered as small processes that communicate with each other over a network to fulfill one particular goal.

Let us understand this with an example. As you can see, there is an online shopping service application.

It can be broken down into smaller microservices like account service, product catalog, cart service and order service. Microservice architecture is gaining a lot of popularity nowadays; even giants like Facebook and Amazon are adopting microservice architecture.

There are three major reasons for adopting microservice architecture, or you can say there are three major advantages of using microservice architecture. First,

there are certain applications which are easier to build and maintain when they are broken down into smaller pieces or smaller services.

The second reason is: suppose I want to update a particular piece of software, or I want a new technology stack in one of my modules or one of my services. I can easily do that, because the dependency concerns will be very few when compared to the application as a whole.

Apart from that, the third reason is: if any of my modules or any of my services goes down, then my whole application remains largely unaffected.

So I hope we are clear on what microservices are and what their advantages are. So we'll move forward and see what are the problems in adopting this microservice architecture.

So this is one way of implementing microservice architecture. Over here, as you can see, there's a host machine, and on top of that host machine there are multiple virtual machines; each of these virtual machines contains the dependencies for one microservice.

So you must be thinking, what is the disadvantage here? The major disadvantage here is, in virtual machines

there is a lot of wastage of resources. Resources such as RAM, processor and disk space are not utilized completely by the microservice which is running in these virtual machines.

So it is not an ideal way to implement microservice architecture, and I have just given you an example of five microservices.

What if there are more than 5 microservices? What if your application is so huge that it requires many more microservices? So at that time, using virtual machines doesn't make sense, because of the wastage of resources.

So let us now see how Docker solves the microservice implementation problem that we just saw.

So what is happening here:

there's a host machine, and on top of that host machine

there's a virtual machine, and on top of that virtual machine there are multiple Docker containers, and each of these Docker containers contains the dependencies for one microservice.

So you must be thinking, what is the difference here? Earlier, we were using virtual machines.

Now, we are using Docker containers on top of virtual machines.

Let me tell you guys, Docker containers are actually lightweight alternatives to virtual machines.

What does that mean? In Docker containers, you don't need to pre-allocate any RAM or any disk space.

So a container will take RAM and disk space according to the requirements of the application.

All right
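That said, if you do want to cap a container's resources explicitly, docker run accepts limit flags. A small sketch (the 512 MB and 1 CPU values are arbitrary examples):

```shell
# Run a CentOS container limited to 512 MB of RAM and one CPU
docker run -it --memory 512m --cpus 1 centos
```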

Now, let us see how Docker solves the problem of not having a consistent computing environment throughout the software delivery lifecycle.

Let me tell you, first of all, Docker containers are actually built by the developers.

So now let us see how Docker solves the first problem that we saw, where an application works fine in the development environment but not in production.

So Docker containers can be used throughout the SDLC in order to provide a consistent computing environment.

So the same environment will be present in dev, test and prod.

So there won't be any difference in the computing environment.

So let us move forward and understand what exactly Docker is.

So Docker containers do not use a guest operating system.

They use the host operating system.

Let us refer to the diagram that is shown.

There is the host operating system, and on top of that host operating system

there's a Docker engine, and with the help of this Docker engine, Docker containers are formed. These containers have applications running in them, and the requirements for those applications, such as all the binaries and libraries, are also packaged in the same container.

All right, and there can be multiple containers running; as you can see, there are two containers here, 1 and 2.

So on top of the host machine there is a Docker engine, and on top of the Docker engine there are multiple containers. Each of those containers will have an application running on it, and whatever binaries and libraries are required for that application are also packaged in the same container.

So I hope you are clear.

So now let us move forward and understand Docker in more detail.

So this is a general workflow of Docker, or you can say one way of using Docker. Over here,

what is happening: a developer writes code that defines the application requirements or the dependencies in an easy-to-write Dockerfile, and this Dockerfile produces Docker images.

So whatever dependencies are required for a particular application are present inside this image. And what are Docker containers? Docker containers are nothing but the runtime instances of Docker images.

This particular image is uploaded onto the Docker Hub.

Now, what is Docker Hub? Docker Hub is nothing but a registry for Docker images, much like a Git repository for code; it contains public as well as private repositories.

So from public repositories you can pull images, and you can upload your own images as well onto the Docker Hub.
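On the command line, that pull-and-push flow looks roughly like this (your-dockerhub-user and my-app are placeholders):

```shell
# Pull a public image from Docker Hub
docker pull centos

# Tag a locally built image under your own Docker Hub namespace
docker tag my-app your-dockerhub-user/my-app:1.0

# Log in and push it to your public or private repository
docker login
docker push your-dockerhub-user/my-app:1.0
```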

All right. From Docker Hub, various teams such as QA or production

will pull the image and prepare their own containers, as you can see from the diagram.

So what is the major advantage we get through this workflow? Whatever dependencies are required for your application are present throughout the software delivery lifecycle.

If you can recall the first problem that we saw: an application works fine in the development environment, but when it reaches production, it is not working properly.

So that particular problem is easily resolved with the help of this particular workflow, because you have the same environment throughout the software delivery lifecycle, be it dev, test or prod. Now, for a better understanding of Docker, we'll see a Docker example.

So this is another way of using Docker. In the previous example, we saw that Docker images were used, and those images were uploaded onto the Docker Hub.

And from Docker Hub, various teams were pulling those images and building their own containers.

But Docker images are huge in size and require a lot of network bandwidth.

So in order to save that network bandwidth, we use this kind of workflow over here.

We use a Jenkins server,

or any continuous integration server, to build an environment that contains all the dependencies for a particular application or microservice, and that build environment is deployed onto various teams, like testing, staging and production.

So let us move forward and see what exactly is happening in this particular image. Over here, a developer has written the complex requirements for a microservice in an easy-to-write Dockerfile.

And the code is then pushed onto the Git repository. From the GitHub repository, continuous integration servers

like Jenkins will pull that code and build an environment that contains all the dependencies for that particular microservice, and that environment is deployed onto testing, staging and production.

So in this way, whatever requirements are there for your microservice are present throughout the software delivery lifecycle.
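As a rough sketch, the steps such a CI server performs in this workflow could look like the following (the repository URL and image name are placeholders, not from the video):

```shell
# Pull the latest code, including its Dockerfile
git clone https://github.com/username/my-microservice.git
cd my-microservice

# Build the environment image described by the Dockerfile
docker build -t my-microservice:latest .

# Run that same environment in testing, staging or production
docker run -d my-microservice:latest
```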

So if you can recall the first problem, where the application works fine in dev but does not work in prod:

with this workflow, we can completely remove that problem, because the requirements for the microservice are present throughout the software delivery lifecycle. And this image also explains how easy it is to implement a microservice architecture using Docker. Now, let us move forward and see how industries are adopting Docker.

So this is the case study of Indiana University. Before Docker,

they were facing many problems.

So let us have a look at those problems one by one.

The first problem was, they were using custom scripts in order to deploy their applications onto various VMs.

So this required a lot of manual steps. And the second problem was, their environment was optimized for legacy Java-based applications, but their growing environment involved new products that aren't solely Java-based.

So in order to provide their students the best possible experience, they needed to begin modernizing their applications.

Let us move forward and see what other problems Indiana University was facing.

So, as we just saw, Indiana University wanted to start modernizing their applications.

So for that, they wanted to move from a monolithic architecture to a microservice architecture. In the previous slides,

we also saw that if you want to update a particular technology in one of your microservices, it is easy to do that, because there will be very few dependency constraints when compared to the whole application.

So because of that reason, they wanted to start modernizing their applications.

They wanted to move to a microservice architecture.

Let us move forward and see what are the other problems that they were facing. Indiana University also needed security for their sensitive student data, such as SSNs and student healthcare data.

So these are the four major problems that they were facing before Docker. Now, let us see how they implemented Docker to solve all these problems. The solution to all these problems was Docker Data Center, and Docker Data Center has various components, which are there in front of your screen: first is Universal Control Plane, then comes LDAP, Swarm,

CS Engine, and finally Docker Trusted Registry. Now, let us move forward and see how they have implemented Docker Data Center in their infrastructure.

This is a workflow of how Indiana University has adopted Docker Data Center.

This is the Docker Trusted Registry.

It is nothing but the storage of all your Docker images, and each of those images contains the dependencies for one microservice. As we saw, Indiana University wanted to move from a monolithic architecture to a microservice architecture.

So because of that reason, these Docker images contain the dependencies for one particular microservice, but not the whole application.

All right, after that comes Universal Control Plane.

It is used to deploy services onto various hosts with the help of the Docker images that are stored in the Docker Trusted Registry.

So the IT team can manage their entire infrastructure from one single place with the help of the Universal Control Plane web user interface.

They can actually use it to provision Docker-installed software on various hosts, and then deploy applications without doing a lot of manual steps. As we saw in the previous slides, Indiana University was earlier using custom scripts to deploy applications onto VMs, which required a lot of manual steps; that problem is completely removed here. When we talk about security, the role-based access controls within the Docker Data Center allowed Indiana University to define levels of access for various teams.

For example, they can provide read-only access to Docker containers for the production team,

and at the same time they can actually provide read and write access to the dev team.

So I hope we are all clear on how Indiana University has adopted Docker Data Center. We'll move forward and see what are the various Docker components.

First is the Docker registry. A Docker registry is nothing but the storage of all your Docker images. Your images can be stored either in public repositories or in private repositories.

These repositories can be present locally, or they can be present on the cloud.

Docker provides a cloud-hosted service called Docker Hub. Docker Hub has public as well as private repositories. From public repositories,

you can actually pull an image and prepare your own containers. At the same time,

you can build an image and upload it onto the Docker Hub.

You can upload it into your private repository, or you can upload it to a public repository as well.

That is totally up to you.

So for a better understanding of Docker Hub, let me just show you how it looks.

So this is how Docker Hub looks.

So first you need to sign in with your own login credentials.

After that,

you will see a page like this, which says Welcome to Docker Hub. Over here, as you can see, there is an option Create Repository, where you can create your own public or private repositories and upload images. And at the same time,

there's an option called Explore Repositories. This contains all the repositories

which are available publicly.

So let us go ahead and explore some of the publicly available repositories.

So we have repositories for NGINX, Redis, Ubuntu; then we have Docker Registry, Alpine, Mongo, MySQL and Swarm.

So what I'll do, I'll show you the CentOS repository.

So this is the CentOS repository, which contains the CentOS image.

Now, what I will do later in the session, I'll actually pull a CentOS image from Docker Hub.

Now, let us move forward and see what are Docker images and containers.

So Docker images are nothing but read-only templates that are used to create containers. These Docker images contain all the dependencies for a particular application or microservice.

You can create your own image and upload it onto the Docker Hub.

And at the same time, you can also pull the images which are available in the public repositories on Docker Hub.

Let us move forward and see what are Docker containers. Docker containers are nothing but the runtime instances of Docker images; a container contains everything that is required to run an application or a microservice. And at the same time,

it is also possible that more than one image is required to create one container.

Alright, so for a better understanding of Docker images and Docker containers, what I'll do on my Ubuntu box is pull a CentOS image and run a CentOS container from it.

So let us move forward and first install Docker in my Ubuntu box.

So guys, this is my Ubuntu box. Over here, first

I'll update the packages.

So for that, I will type sudo apt-get update.

It is asking for the password. It is done now.

Before installing Docker,

I need to install the recommended packages. For that,

I'll type sudo

apt-get install

linux-image-extra-$(uname -r) linux-image-extra-virtual, and here we go.

Press Y. So we are done with the prerequisites.

So let us go ahead and install Docker. For that,

I'll type sudo

apt-get install docker-engine. So we have successfully installed Docker. If you want to install Docker on CentOS,

you can refer to the CentOS Docker installation video.

Now we need to start the Docker service. For that,

I'll type sudo service docker start.

So it says the job is already running.


What I will do, I will pull a CentOS image from Docker Hub, and I will run a CentOS container.

So for that, I will type sudo

docker pull and the name of the image,

that is centos. First, it will check the local registry for the CentOS image.

If it doesn't find it there, then it will go to the Docker Hub for the CentOS image, and it will pull the image from there.

So we have successfully pulled the CentOS image from Docker Hub.

Now I'll run the CentOS container.

So for that, I'll type sudo docker run -it centos, where centos is the name of the image.

And here we go.

So we are now in the CentOS container.

Let me exit from this.

Clear my terminal.

So let us now recall what we did. First,

we installed Docker on Ubuntu. After that,

we pulled the CentOS image from Docker Hub.

And then we built a CentOS container using that CentOS image.
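Both steps can be verified on the same box: docker images lists the pulled image, and docker ps -a lists the container we just exited:

```shell
sudo docker images    # the centos image should be listed here
sudo docker ps -a     # the exited centos container should be listed here
```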

Now I'll move forward, and I'll tell you what exactly Docker Compose is.

So let us understand what exactly Docker Compose is. Suppose you have multiple applications on various containers, and all those containers are actually linked together.

So you don't want to execute each of those containers one by one, but you want to run those containers at once, with a single command.

So that's where Docker Compose comes into the picture. With Docker Compose,

you can actually run multiple applications present on various containers with one single command, that is docker-compose up. As you can see, there is an example in front of you: imagine you're able to define three containers, one running a web app, another running a Postgres,

and another running a Redis, in a YAML file that is called the Docker Compose file.

And from there,

you can actually execute all these three containers with one single command.

That is docker-compose up. Let us understand this with an example. Suppose

you want to publish a blog. For that you'll use a CMS, and WordPress is one of the most widely used CMSs. So you need one

container for WordPress, and you need one more container for MySQL as the backend, and that MySQL container should be linked to the WordPress container. Apart from that,

you need one more container for phpMyAdmin that should be linked to the MySQL database, as it is used to access the MySQL database.

So what if you are able to define all these three containers in one YAML file, and with one command, that is docker-compose up, all three containers are up and running?

So let me show you practically how it is done, on the same Ubuntu box where I've installed Docker and I've pulled a CentOS image.

This is my Ubuntu box. First,

I need to install Docker Compose here, but before that I need Python pip. So for that, I will type sudo

apt-get install

python-pip, and here we go.

So it is done now.

I will clear my terminal, and now I'll install Docker Compose. For that,

I'll type sudo pip install docker-compose, and here we go.

So Docker Compose is successfully installed.

Now I'll make a directory, and I'll name it WordPress: mkdir WordPress.

Now I'll enter this WordPress directory.

Now over here, I'll edit the docker-compose.yml file using gedit.

You can use any other editor that you want;

I'll use gedit.

So I'll type sudo gedit docker-compose.yml, and here we go.

So over here, what I'll do, I'll first open a document.

And I'll copy this YAML code.

And I will paste it here.

So let me tell you what I've done. First,

I have defined a container and named it wordpress.

It is built from the image wordpress that is present on the Docker Hub.

But this wordpress image does not have a database.

So for that, I have defined one more container, and I've named it wordpress_db.

It is actually built from the image called mariadb, which is present on the Docker Hub, and I need to link this wordpress_db with the wordpress container.

So for that, I have written links: wordpress_db:mysql.

All right, and in the ports section, port 80 of the Docker container will actually be linked to port 8080 of my host machine.

So are we clear till here? Now, what I've done, I've defined a password here as edureka.

You can give whatever password you want. And I have defined one more container called phpmyadmin.

This container is built from the image corbinu/docker-phpmyadmin, which is present on the Docker Hub. Again,

I need to link this particular container with the wordpress_db container. For that,

I have written links: wordpress_db:mysql. And in the ports section, port 80 of my Docker container will actually be linked to port 8181 of the host machine. And finally, I've given a username, that is root, and I've given the password as edureka.
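Putting that description together, the docker-compose.yml would look roughly like this. This is a sketch in the older Compose v1 links syntax used in the video; the image names, ports and the edureka password are taken from the walkthrough, so adjust them for your own setup:

```yaml
wordpress:
  image: wordpress
  links:
    - wordpress_db:mysql
  ports:
    - 8080:80

wordpress_db:
  image: mariadb
  environment:
    MYSQL_ROOT_PASSWORD: edureka

phpmyadmin:
  image: corbinu/docker-phpmyadmin
  links:
    - wordpress_db:mysql
  ports:
    - 8181:80
  environment:
    MYSQL_USERNAME: root
    MYSQL_ROOT_PASSWORD: edureka
```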

So let us now save it and quit. Let me first clear my terminal.

And now I'll run the command sudo docker-compose

up -d, and here we go.

So this command will actually pull all the three images and build the three containers.

So it is done now.

Let me clear my terminal.

Now what I'll do, I'll open my browser, and over here

I'll type the IP address of my machine, or I can type the hostname as well.

The hostname of my machine is localhost.

So I'll type localhost and port 8080 that I've given for WordPress.

So it will direct you to a WordPress installation pageover here

You need to fill this particular form, which is asking you for a site title.

I'll give it as edureka. The username also I will give as edureka.

For the password I'll type edureka and confirm the use of a weak password. Then type your email address, and it asks about search engine visibility.

I'll make my choice here, and finally I'll click on Install WordPress.

So this is my WordPress dashboard, and WordPress is now successfully installed.

Now what I'll do is open one more tab, and over here I'll type localhost, or the IP address of the machine, and I'll go to port 8181 for phpMyAdmin.

And over here, I need to give the username.

If you can recall, I've given root, and the password I've given as edureka. And here we go.

So phpMyAdmin is successfully installed.

This phpMyAdmin is actually used to access a MySQL database, and this MySQL database is used as the back-end for WordPress.

If you've landed on this video, then it's definitely because you want to install a Kubernetes cluster on your machine.

Now, we all know how tough the installation process is, hence this video on our YouTube channel.

My name is Vardhan and I'll be your host for today.

So without wasting any time, let me show you the various steps that we have to follow.

There are various steps that we have to run at both the master's and the slave's end, then a few commands only at the master's end to bring up the cluster, and then one command which has to be run at all the slave ends so that they can join the cluster.

So let me get started by showing you those commands and installation steps which have to be run commonly on both the master's and the slave's end. First of all, we have to update the repository.

Okay, since I am using Ubuntu, I have to update my apt-get repository.

Okay, and after that we have to turn off the swap space, be it the master's end or the slaves'; Kubernetes will not work if the swap space is on.

Okay, we have to disable that, so there are a couple of commands for that. And then the next part is: you have to update the hostname and the hosts file, and we have to set a static IP address for all the nodes in your cluster.

Okay, we have to do that because at any point of time if your master or a node in the cluster fails, then when it restarts it should have the same IP address. If you have a dynamic IP address and the machine restarts because of a failure condition, then it will be a problem, because it will not be able to join the cluster since it will have a different IP address.

So that's why you have to do these things.

All right, there are a couple of commands for that, and after that we have to install the openssh-server and Docker. That is because Kubernetes requires the openssh functionality, and it of course needs Docker because everything in Kubernetes runs in containers, right? So we are going to make use of Docker containers.

So that's why we have to install these two components, and finally we have to install kubeadm, kubelet and kubectl.

These are the core components of your Kubernetes setup.

All right

So these are the various components that have to be installed on both your master and your slave end. So let me first of all open up my VMs and then show you how to get started. Now, before I get started, let me tell you one thing.

You have a cluster; you have a master and then you have slaves in that cluster, right? Your master should always have better configuration than your slaves.

So for that reason, if you're using virtual machines on your host, then you have to ensure that your master has at least 2 GB of RAM and two core CPUs.

Okay, and your slave has 2 GB of RAM and at least one core CPU.

So these are the basic necessities for your master and slave machines. On that note, I think I can get started.

So first of all, I'll bring up my virtual machine and go through these installation processes.

So I hope everyone can see my screen here.

This is my first VM, and what I'm going to do is make this my master.

Okay, so all the commands to install the various components are present with me in my notepad. Okay, so I'm going to use this for reference and then quickly execute these commands and show you how Kubernetes is installed.

So first of all, we have to update our apt-get repository.

Okay, but before that, let's log in as su. Okay, so I'm going to do a sudo su so that I can execute all the following commands as the superuser.

So sudo su, and there goes my root password. Now you can see the difference here, right? Here I was executing as a normal user, but from here on I'm a root user.

So I'm going to execute all these commands as su. So first of all, let's do an update.

I'm going to copy this and paste it here: apt-get update, to update my Ubuntu repositories.

All right, so it's going to take quite some time.

So just hold on till it's completed.


So this is done.

The next thing I have to do is turn off my swap space.

Now, the command to disable my swap space is swapoff with the flag -a. Let me go back here and do the same.

Okay, swapoff with the flag -a.

And now we have to go to this fstab.

So this is a file called fstab, okay, and it will have a line with the entry for the swap space, because at any point of time if you have enabled swap space, then you will have a line over there.

Now we have to disable that line.

Okay, we can disable that line by commenting it out.

So let me show you how that's done.

I'm just using the nano editor to open this fstab file.

Okay, so you can see this line right here where it says swap file.

This is the one which I have to comment out.

So just let me come down here and comment it out like this.

Okay, with the hash. Now, let me save this and exit.
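Put together, the swap-related steps just described come down to two commands. The sed one-liner is a hedged alternative to editing /etc/fstab by hand in nano, as done in the video; it assumes the swap entry contains the word "swap", which is the usual case on Ubuntu:

```shell
# Turn off swap immediately (required on every Kubernetes node)
sudo swapoff -a

# Make the change survive reboots: comment out the swap line in /etc/fstab
# (equivalent to the manual nano edit; a .bak backup is kept)
sudo sed -i.bak '/swap/ s/^/#/' /etc/fstab

# Verify: the Swap row of `free -h` should now show 0B
free -h
```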

Now, the next thing I have to do is update my hostname and my hosts file, and then set a static IP address.

So let me get started by first updating the hostname.

So for that I have to go to this hostname file, which is in this /etc path.

So I'm again using nano for that.

You can see here it's edureka-VirtualBox, right? So let me replace this and say kmaster, as in Kubernetes master.

So let me save this and exit. Now, if you want your hostname to reflect over here — right now it still says root@edureka-VirtualBox, so the hostname does not look updated as yet — and if you want it to be updated to kmaster, then you have to first of all restart this VM or your system.

If you're doing it on a physical system, then you have to restart your system.

And if you're doing it on a VM, you have to restart your VM.

Okay, so let me restart my VM in some time.

But before that, there are a few more commands which I want to run, and that is to set a static IP address.

Okay, so I'm going to run this ifconfig command. Okay.

So right now my IP address is 192.168.56.101, and the next time when I turn on this machine, I do not want a different IP address.

So to set this as a static IP address, I have a couple of commands.

Let me execute that command first.

So you can see this interfaces file, right? So under /etc/network, we have a file called interfaces.

So this is where you define all your network interfaces.

Now, let me enter this file and add the rules to make it a static IP address. As you can see here, the last three lines are the ones which ensure that this machine will have a static IP address.

These three lines are already there on my machine.

Now, if you want to set a static IP address at your end, then make sure that you have these things defined correctly.

My IP address ends in 101.

So I would just define it like this.

So let me just exit.
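For reference, the three lines being pointed at would look something like the fragment below in /etc/network/interfaces. The interface name enp0s8 is an assumption (a typical VirtualBox host-only adapter name); use whatever name ifconfig shows on your machine:

```text
# /etc/network/interfaces (relevant portion — a sketch)
auto enp0s8
iface enp0s8 inet static
address 192.168.56.101
```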

So the next thing that I have to do is go to the hosts file and update my IP address over there.

Okay, so I'm going to copy this and go to my /etc/hosts file. Now over here, you can see that there is no entry.

So I have to mention that this is my kmaster.

So let me specify my IP address first.

This is my IP address, and now we have to update the name of the host.

So this host is kmaster; I'm just going to enter that and save this.

Now the thing that we have to do is restart this machine.

So let me just restart this machine and get back to you in a while.


So now that we are back on, let me check if my hostname and hosts file have been updated.

There you go.

You can see here, right? It says kmaster.

So this means that my hostname has been successfully updated. We can also verify that my IP address is the same. Let me do an ifconfig, and as you can see my IP address has not changed.

All right, so this is good.


This is what we wanted


Let's continue with our installation process.

Let me clear the screen and go back to the notepad and execute the command which installs my openssh-server.

So this is going to be the command to do that, and we have to execute this as the sudo user.

Right, so sudo apt-get install openssh-server.

That's the command.

Okay, let me say yes and enter.


So my SSH server would have been installed by now. Let me clear the screen and install Docker.

But before I run the command which installs Docker, I will update my repository.

Okay, so let me log in as sudo first of all.

Okay, sudo su is the command, and okay, I have logged in as the root user.

The next thing is to update my repository, so I have to do an apt-get update.

Now again, this is going to take some more time.

So just hold on till then.

Okay, this is also done. Now we can straight away run the command to install Docker.

This is the command to install Docker.

Okay, from the apt-get repository I'm installing Docker, and I'm specifying -y. Why? Because -y is my flag: whenever a prompt comes up during installation saying "do you want to install it, yes or no", specifying -y means that by default it will accept yes as the answer.

Okay, so that is the only concept behind -y. So again, installing Docker is going to take a few more minutes.

Just hang on till then.

Okay, great.

So Docker is also installed.


So let me go back to the notepad.

So, to establish the Kubernetes environment, the three main components that Kubernetes is made up of are kubeadm, kubelet and kubectl. But just before I install these three components, there are a few things I have to do, like installing curl, then downloading certain packages from this URL, and then running an update.

So let me execute these commands one after the other first, and then install Kubernetes.

So let's first of all start with this command, where I'm installing curl.

Now, the next command is basically downloading these packages using curl, and curl is basically this tool with which you can download packages from your command line.

So this is basically a web URL, right? So I can access whatever packages are there on this web URL and download them using curl. So that's why I've installed curl in the first place.

So on executing this command I get this, which is perfect. Now when I go back, there is this one which we have to execute.

Okay, let me hit enter, and I'm done. And finally I have to update my apt-get repository, and the command for that is this one: apt-get update. Okay, great.

So all the preparation steps are also done.


I can now straight away set up my Kubernetes environment by executing this command.

So in the same command I say install kubelet, kubeadm and kubectl, and just to avoid the yes prompt I'm specifying the -y flag.

Okay, which would by default take yes as the answer.

And of course I'm taking it from the apt-get repository, right? So let me just copy this and paste it here.

Give it a few minutes, guys, because installing Kubernetes is going to take some time.

Okay, bingo.

So Kubernetes has also been installed successfully.
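The whole installation sequence described so far can be summarized as follows. This is a sketch for Ubuntu; the key URL and repository line follow the packages.cloud.google.com convention that installers of this era used, and may differ on newer releases, so treat them as assumptions and check the current Kubernetes install docs:

```shell
# Run as root (sudo su), on BOTH master and slave
apt-get update
apt-get install -y openssh-server docker.io curl

# Add the Kubernetes apt key and repository
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | apt-key add -
echo "deb http://apt.kubernetes.io/ kubernetes-xenial main" \
  > /etc/apt/sources.list.d/kubernetes.list

# Install the three core components
apt-get update
apt-get install -y kubelet kubeadm kubectl
```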


Let me conclude the setting up of this Kubernetes environment by updating the Kubernetes configuration.

So there's this file, right, kubeadm.conf; kubeadm is the one that's going to let me administer my Kubernetes.

So I have to go to this file and add this one line. Okay, so let me first of all open up this file using my nano editor.

So let me again log in with sudo su, and this is the command.

So, as you can see, we have this set of environment variables.

So right after the last environment variable I have to add this one line, and that line is this one. All right.

Now, let me just save this and exit. Brilliant.

So with that, the components which have to be installed at both the master and the slave come to an end.


What I will do next is run certain commands only at the master to bring up the cluster, and then run this one command at all my slaves to join the cluster.

So before I start doing anything more over here, let me also tell you that I have already done the same steps on my node.

So if you are doing it at your end, then whatever steps you've done so far, run the same set of commands on another VM, because that will be acting as your node VM. In my case, I have already done that, just to save some time. So let me show you: this is my kmaster VM, and right here I have my knode, which is nothing but my Kubernetes node, and I've basically run the same set of commands in both places. But there is one thing which I have to ensure before I bring up the cluster, and that is to check the network IP addresses, the hostname and the hosts file.

So this is my Kubernetes node, so all I'm going to do is run cat and say /etc/hosts.


Now, over here I have the IP address of my Kubernetes node — that is this very machine — and it specifies the name of the host.

However, the name of my Kubernetes master host is not present, and neither is its IP address.

So that is one manual entry we have to do. If you remember, let me go to my master and check what the IP address is. Yes.

So the IP address over here is 192.168.56.101.

So this is the IP address I have to add at my node's end.

So I have to modify this file for that. All right, but before that, you have to also ensure that this is a static IP address.

So let me ensure that the IP address of my cluster node does not change.

So the first thing we have to do before anything is check what the current IP address is, and for my node the IP address is 192.168.56.102. Okay, now let me open this network interfaces file.

So, as you can see here, this is already set to be a static IP address.

You have to ensure that these same lines are there in your machine if you want it to be a static IP address. Since it's already there for me, I'm not going to make any change, but rather I'm going to go and check what my hostname is. I mean, the hostname should anyway show the same thing, because right now it's knode.

So that's what it's going to reflect.

But anyway, let me just show it to you.

Okay, so my hostname is knode. Brilliant.

So this means that there is one thing which I have to change, and that is nothing but adding the particular entry for my master.

So let me first clear the screen and then use my nano editor. In fact, I'll have to run it as sudo.

So as the sudo user I'm going to open my nano editor and edit my hosts file.

Okay, so here let me just add the IP address of my master.

So what exactly is the IP address of the master? Yes, this is my kmaster.

So I'm just going to copy this IP address, come back here, paste the IP address, and I'm going to say the name of that particular host is kmaster.

And now let me save this. Perfect.

Now, what I have to do next is go back to my master and ensure that the hosts file there has an entry for my slave.

I'll clear the screen, and first I'll open up my hosts file.

So on my master's end the only entry there is for the master.

So I have to write another line where I specify the IP address of my slave and then add the name of that particular host, that is knode.

And again, let me use the nano editor for this purpose.

So I'm going to say sudo nano /etc/hosts.

Okay, so I'm going to come here, say 192.168.56.102, and then say knode.

All right.

Now all the entries are perfect.

I'm going to save this and exit. So the hosts file on both my master and my slave has been updated, the static IP address for both my master and the slave has been set, and also the Kubernetes environment has been established.


Now, before we go further and bring up the cluster, let me do a restart, because I've updated my hosts file.

So let me restart both my master and my slave VMs, and if you're doing it at your end, then you have to do the very same. Okay, so let's say restart, and similarly let me go to my node here and do a restart.

Okay, so I've just logged in, and now that my systems are restarted, I can go ahead and execute the commands at only the master's end to bring up the cluster.


So first of all, let me go through the steps which need to be run at the master's end.

At the master, first of all we have to run a couple of commands to initiate the Kubernetes cluster, and then we have to install a pod network.

We have to install a pod network because all my containers inside a single pod will have to communicate over a network; a pod is nothing but a network of containers.

So there are various container networks which I can use: I can use the Calico pod network, I can use the Flannel pod network, or any other one — you can see the entire list in the Kubernetes documentation.

And in this session, I am going to use the Calico network.

Okay, so that's pretty simple and straightforward, and that's what I'm going to show you next.

So once you've set up the pod network, you can straight away bring up the Kubernetes dashboard. And remember that you have to set up the Kubernetes dashboard and bring it up before your nodes join the cluster, because in this version of Kubernetes, if you first get your nodes to join the cluster and after that try bringing the Kubernetes dashboard up, then your dashboard gets hosted on a node, and you don't want that to happen, right? If you want the dashboard to come up at your master's end, you have to bring up the dashboard before your nodes join the cluster.

So these are the three things we have to do: initiating the cluster, installing the pod network, and then setting up the Kubernetes dashboard.

So let me go to my master and execute commands for each of these processes.

So I suppose this is my master.

And yes, this is my kmaster.

So first of all, to bring up the cluster we have to execute this command.

Let me copy this, and over here we have to replace the IP address.

So, the IP address of my master, right? For this machine I have to specify that IP address over here, because this is where the other nodes will come and join. This is the master, right? So I'm just setting the apiserver-advertise-address to 192.168.56.101, so that all the other nodes can come and join the cluster on this IP address. And along with this, I have to also specify the pod network. Since I've chosen the Calico pod network, there is a network range which my Calico pod network uses; CNI basically stands for container network interface.

If I'm using the Calico pod network, then I have to use Calico's network range, but in case you want to use a Flannel pod network, then you can use that network's range.

Okay, so let me just copy this one and paste it.

All right.

So the command is sudo kubeadm init, with the pod network CIDR followed by the IP address from where the other nodes will have to join.

So let's go ahead and enter. Since we're doing it for the first time, give it a few minutes, because Kubernetes takes some time to initialize.

Just hold on until that happens.

All right.

Okay, great.

Now it says that your Kubernetes master has initialized successfully; that's good news.

And it also says that to start using your cluster, we need to run the following commands as a regular user.

Okay, so we'll note that, log out as the sudo user, and as a regular user execute these three commands. And also, if I have to deploy a pod network, then I have to run a command. Okay.

So this is that command which I have to run to bring up my pod network.

So I'll basically be applying the YAML file which is present over here.

So before I get to all these things, let me show you that we have a kubeadm join command which has been generated, right? So this is generated at my master's end, and I have to execute this command at my node to join the cluster. But that would be the last step, because like I said earlier, these three commands have to be executed first, then I have to bring up my pod network, then bring up my dashboard, and then get my nodes to join the cluster using this command.

So for my reference, I'm just going to copy this command and store it somewhere else.

So right under this, let me just keep this command for later reference.
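The master-side sequence just described — init, regular-user kubectl setup, pod network — can be sketched like this. The Calico manifest URL is an assumption (take the one for your version from the Calico docs), and the join command's token and hash are placeholders printed by kubeadm init itself:

```shell
# 1. Initialize the cluster (as root), advertising the master's static IP
sudo kubeadm init \
  --apiserver-advertise-address=192.168.56.101 \
  --pod-network-cidr=192.168.0.0/16        # Calico's default range

# 2. As a regular user, set up kubectl's config
#    (these three commands are printed by kubeadm init on success)
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config

# 3. Deploy the Calico pod network
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# 4. Later, on each slave, run the join command kubeadm printed, e.g.:
# sudo kubeadm join 192.168.56.101:6443 --token <token> \
#     --discovery-token-ca-cert-hash sha256:<hash>
```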

And in the meanwhile, let me go ahead and execute all these commands one after the other.

These are as per Kubernetes' instructions, right? Yes.

I would like to overwrite it.

And then, okay.

Now that I'm done with this, let me first of all bring up my pod network.

Now, the command to bring up my pod network is this. Perfect.

So my Calico pod has been created. Now I can verify if my pod has been created by running the kubectl get pods command.

So this is my kubectl get pods; I can say -o wide --all-namespaces.

Okay, by specifying -o wide and --all-namespaces, I'll basically get all the pods ever deployed,

even the default pods which get deployed when the Kubernetes cluster initiates.

So basically, when the Kubernetes cluster is initiated, it is deployed along with a few default pods, especially for your pod network.

There is one pod which is hosted for your cluster network, and then there is one pod which is deployed for your dashboard and whatnot.

So this is the entire list, right? There's your Calico pod; for your etcd there's one pod; for your kube-controller there's a pod; and we have various pods like this for your master, your API server and many other things.

So these are the default deployments that you get. So anyway, as you can see, the default deployments are all healthy, because the status is all Running, and everything is basically running in the kube-system namespace.

All right, and it's all running on my kmaster. That's my Kubernetes master.

So the next thing that I have to do is bring up the dashboard before I get my nodes to join.

Okay, so I'll go to the notepad and copy the command to bring up my dashboard.

So, copy and paste. Great.

This is my Kubernetes dashboard; basically, this pod has come up now.

If I execute the same kubectl get pods command, then you can see that I've got one more pod, which is deployed for my dashboard basically.

So last time this was not there, because I had not deployed my dashboard at that time, right? I had only deployed my pod network and the other things. So I've deployed it and the container is creating, so in probably a few more seconds this would also be running. Anyway, in the meanwhile, what we can do is work on the other things which are needed to bring up the dashboard.

First of all, enable your proxy and get it to serve as a web server; there's a kubectl proxy command for that. Okay.

So with this, your service starts to be served on this particular port number.

Okay, localhost port number 8001 of my master.

Okay, not from the nodes.

So if I just go to my Firefox and go to localhost:8001, then my dashboard service would be up and running over there.

So basically my dashboard is being served on this particular port number.

But if I want to actually get to my dashboard, which shows my deployments and my services, then that's a different URL.

So yeah, as you can see here, localhost:8001/api/v1/... — this entire URL is the one which is going to lead me to my dashboard.

But at this point of time I cannot log in to my dashboard, because it's prompting me for a token, and I do not have a token, because I have not done any cluster role binding and I have not mentioned that I am the admin of this particular dashboard.

So to enable all those things, there are a few more commands that we have to execute, starting with creating a service account for your dashboard.

So this is the command to create your service account.

So go back to the terminal, and probably in a new terminal window execute this command. Okay.

So with this you're creating a service account for your dashboard, and after that you have to do the cluster role binding for your newly created service account.

So the dashboard service account has been created in the default namespace, as per this.

Okay, and here I'm saying that my dashboard is going to be for admin, and I'm doing the cluster role binding.

Okay, and now that this is created, I can straight away get the token, because if you remember it's asking me for a token to log in, right? So even though I am the admin now, I will not be able to log in without the token. So to generate the token I have to again run this command: kubectl get secret.

Okay, so I'm going to copy this and paste it here.

So this is the token, or this is the key, that basically needs to be used.

So let me copy this entire token and paste it over here.

So let me just save this, and yeah, now you can see that my Kubernetes cluster has been set up, and I can see the same thing from the dashboard over here.

So basically, by default the Kubernetes service is deployed.

Right? So this is what you can see. But I've just brought up the dashboard now, and the cluster is not complete until my nodes join in.
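The dashboard steps above, in command form. This is a sketch: the dashboard manifest URL and the service-account name "dashboard" are assumptions matching the dashboard version of this video's era, and granting cluster-admin to the dashboard is fine for a lab but too permissive for production:

```shell
# Deploy the dashboard and expose the API server locally
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml
kubectl proxy &   # serves on http://localhost:8001

# Create a service account for the dashboard and make it cluster admin
kubectl create serviceaccount dashboard -n default
kubectl create clusterrolebinding dashboard-admin -n default \
  --clusterrole=cluster-admin --serviceaccount=default:dashboard

# Print the login token for that service account
kubectl get secret $(kubectl get serviceaccount dashboard \
  -o jsonpath="{.secrets[0].name}") \
  -o jsonpath="{.data.token}" | base64 --decode
```

The decoded token is what gets pasted into the dashboard's login prompt.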

So let's go to the final part of this demonstration, wherein I'll ask my slaves to join the cluster.

So, you remember I copied the join command, which was generated at my master's end, into my notepad.

So I'm going to copy that and execute it at the slave's end to join the cluster.

So let me first of all go to my notepad, and yeah, this is the join command which I had copied, right?

So I'm going to copy this, and now I'm going to go to my node.

So let me just paste this and see what happens.

Let me just run this command as sudo.

Perfect.

I've got the message that I have successfully established connection with the API server on this particular IP address and port number, right? So this means that my node has joined the cluster. We can verify that from the dashboard itself.

So if I go back to my dashboard, which is hosted at my master's end, I have an option here, Nodes.

If I click on this, then I will get the details about my nodes over here.

So earlier I only had the kmaster, but now I have both the kmaster and the knode. Give it a few more seconds until my node comes up.

I can also verify the same from my terminal.

So if I go to my terminal here and run the command kubectl get nodes, it will give me the details about the nodes which are there in my cluster. So kmaster is one that is already there in the cluster, but knode will take some more time to join.

All right, so that's it, guys.

So that is about my deployment, and that's how you deploy a Kubernetes cluster.

So from here on you can do whatever deployment you want.

Whatever you want to deploy, you can deploy it easily and very effectively, either from the dashboard or from the CLI. And there are various other video tutorials of ours which you can refer to, to see how a deployment is made on Kubernetes.

So I would request you to go to the other videos and see how a deployment is made, and I would like to conclude this video on that note.

If you're a DevOps guy, then you would have definitely heard of Kubernetes, but I don't think the DevOps world knows enough about what exactly Kubernetes is and where it's used.

And that's why we at edureka have come up with this video on what is Kubernetes.

My name is Vardhan and I'll be representing edureka in this video.

And as you can see from the screen, these will be the topics that we'll be covering in today's session. I'll first start off by talking about what is the need for Kubernetes. And after that I will talk about what exactly it is and what it's not, and I will do this because there are a lot of myths surrounding Kubernetes and there's a lot of confusion; people have misunderstood Kubernetes to be a containerization platform.

Well, it's not, okay?

So I will explain what exactly it is over here.

And then after that I will talk about how exactly Kubernetes works.

I will talk about the architecture and all the related things.

And after that I will give you a use case.

I will tell you how Kubernetes was used at Pokemon Go and how it helped Pokemon Go become one of the best games of the year 2017. And finally, at the end of the video, you will get a demonstration of how to do a deployment with Kubernetes.


So I think the agenda is pretty clear; I think we can get started with our first topic then. Now, the first topic is all about: why do we need Kubernetes?

Okay, now to understand why we need Kubernetes, let's understand the benefits and drawbacks of containers.

Now, first of all, containers are good.

They are amazingly good, right? Any container for that matter — a Linux container, or a Docker container, or even a rkt container — they all do one thing: they package your application and isolate it from everything else. They isolate the application from the host mainly, and this makes the container fast, reliable, efficient, lightweight and scalable. Now hold that thought: yes, containers are scalable, but then there's a problem that comes with that, and this is what results in the need for Kubernetes. Even though containers are scalable, they are not very easily scalable.

Okay, so let's look at it this way.

You have one container; you might want to probably scale it up to two containers or three containers.

Well, it's possible, right? It's going to take a little bit of manual effort, but yeah, you can scale it up without much of a problem.

But then look at a real-world scenario where you might want to scale up to like 50 to 200 containers. Then in that case, what happens after scaling up? You have to manage those containers, right? We have to make sure that they are all working, they are all active, and they're all talking to each other, because if they're not talking to each other, then there's no point in scaling up itself, because in that case the servers would not be able to handle the load, correct?

So it's really important that they are manageable when they are scaled up. And now let's talk about this point: is it really tough to scale up containers? Well, the answer to that might be no.

It might not be tough; it's pretty easy to scale up containers. But the problem is what happens after that.

Okay, once you scale up containers, you will have a lot of problems.

Like I told you, the containers first of all have to communicate with each other, because they are so many in number and they work together to basically host the service, right — the application. And if they are not working together and talking to each other, then the application is not hosted and scaling up is a waste. So that's the number one reason. And the next is that the containers have to be deployed appropriately, and they also have to be managed. They have to be deployed appropriately because you cannot have the containers deployed in random places.

You have to deploy them in the right places.

You cannot have one container in one particular cloud and the other one somewhere else.

That would have a lot of complications.

Well, of course it's possible, but yeah, it would lead to a lot of complications internally. You want to avoid all that, so you have to have one place where everything is deployed appropriately, and you have to make sure that the IP addresses are set everywhere and the port numbers are open for the containers to talk to each other and all these things.


So these are the two other points the nextPoint our the next problem with scaling up is that auto scaling is never a functionalityover here? Okay, and this is one of the things which is the biggest benefit with Cuba Nets

The problem technically is there is no Auto scaling functionality

Okay, there's no conceptof that at all

And you may ask at this point of time

Why do we even need auto-scaling?Okay, so let me explain the need for auto scaling with an example

So let's say thatyou are an e-commerce portal

Okay, something like an Amazon or a Flipkart, and let's say that you have a decent amount of traffic on the weekdays, but on the weekends you have a spike in traffic.

Probably you have like 4x or 5x the usual traffic, and in that case what happens is maybe your servers are good enough to handle the requests coming in on weekdays, right? But the requests that come in on the weekends, from the increased traffic, cannot be serviced by your servers. Maybe it's too much for your servers to handle the load, and maybe in the short term

it's fine, maybe once or twice you can survive, but there will definitely come a time when your server will start crashing because it cannot handle that many requests per second or per minute.

And if you want to really avoid this problem, what do you do? You have to scale up. And now, would you really keep scaling up every weekend and scaling down after the weekend, right? I mean, technically, is it possible? Will you be buying your servers and then setting them up, and every Friday would you again be buying new servers and setting up your infrastructure? And then the moment your weekday starts,

would you just destroy all your servers, whatever you built?

Is that what you would be doing? No, right? Obviously, that's a pretty tedious task.

So that's where something like Kubernetes comes in. What Kubernetes does is it keeps analyzing your traffic and the load on the containers, and as and when the traffic reaches the threshold, auto-scaling happens: if the servers have a lot of traffic, more containers are brought up for handling requests, and when it needs no more such servers, it starts killing off the containers on its own.

There is nomanual intervention needed at all

So that's one benefit with Kubernetes, and one traditional problem that we have with scaling up of containers.
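That auto-scaling idea can be sketched as a manifest. This is a hypothetical example, not from the session: the Deployment name `webapp` and the thresholds are assumed, and the `autoscaling/v2` API shown here is what recent clusters use.

```yaml
# Hypothetical example: autoscale a Deployment named "webapp"
# between 2 and 10 replicas based on average CPU usage.
apiVersion: autoscaling/v2
kind: HorizontalPodAutoscaler
metadata:
  name: webapp-hpa
spec:
  scaleTargetRef:
    apiVersion: apps/v1
    kind: Deployment
    name: webapp
  minReplicas: 2
  maxReplicas: 10
  metrics:
    - type: Resource
      resource:
        name: cpu
        target:
          type: Utilization
          averageUtilization: 70
```

With something like this applied, the weekend spike adds replicas automatically and the weekday lull removes them, with no manual scaling either way.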

Okay, and then yeah, the one last problem that we have is the distribution of traffic, which is still challenging without something that can manage your containers.

I mean, you have so many containers, but how will the traffic be distributed? Load balancing.

How does that happen? You just have containers, right? You have 50 containers.

How does the load balancing happen? So all these are questions

We should really consider these, because containerization is all good and cool.

It was much better than VMs.

Yes, containerization was basically a concept which was sold on the basis of scaling up.

Right? We said that VMs cannot be scaled up easily.

So we said use containers, and with containers you can easily scale up.

So that was the whole reason: we basically sold containers with the tagline of scaling up.

But in today's world, our demands are ever more, such that even regular containers cannot be enough; scaling up is so extensive and so detailed that we need something else to manage your containers, correct?

Do we agree that we need something, right? And that is exactly what Kubernetes is.

So Kubernetes is a container management tool.

All right

So this is open source, and it basically automates your container deployment, your container scaling and descaling, and your container load balancing. The benefit with this is that it works brilliantly with all the cloud vendors, all the big cloud vendors or your hybrid cloud vendors, and it also works on-premises.

So that is one big selling point of Kubernetes.

Right? And if I should give more information about Kubernetes, then let me tell you that this was a Google-developed product.


It's basically a brainchild of Google, and that pretty much is the end of the story for every other competitor out there, because the community that Google brings along with it is going to be huge. Basically, the head start that Kubernetes got because of being a Google brainchild is humongous.

And that is one of the reasons why Kubernetes is one of the best container management tools in the market, period. And given that Kubernetes is a Google product,

they have written the whole product in the Go language.

And of course, now Google has contributed this whole Kubernetes project to the CNCF, which is nothing but the Cloud Native Computing Foundation, or simply the Cloud Native Foundation, right? You can call them either of those, and Google has donated this open-source project to them.

And if I have to just summarize what Kubernetes is, you can think of it like this: it can group a number of containers into one logical unit for managing and deploying an application or a particular service.

So that's a very simple definition of what Kubernetes is.

It can be easily used for deploying your application.

Of course

It's going to be Docker containers which you will be deploying

But since you will be using a lot of Docker containers as part of your production, you will also have to use Kubernetes, which will be managing your multiple Docker containers, right? So this is the role it plays in terms of deployment. And scaling up and scaling down is primarily the game of Kubernetes, from your existing architecture.

It can scale up to any number you want

It can scale down anytime, and the best part is the scaling can also be set to be automatic.

Like I just explained some time back, right, Kubernetes would analyze the traffic and then figure out if scaling up needs to be done or scaling down can be done, and all those things.

And of course, the most important part: load balancing, right? I mean, what good is your container or group of containers if load balancing cannot be enabled, right? So Kubernetes does that also, and these are some of the points based on which Kubernetes is built.

So I'm pretty sure you have got a good understanding of what Kubernetes is by now, right? A brief idea at least. So moving forward,

let's look at the features of Kubernetes. Okay.

So we've seen what exactly Kubernetes is and how it would use Docker containers, or other containers in general. But now let's see some of the selling points of Kubernetes, or why it's a must for you.

Let's start off with automatic bin packing. When we say automatic bin packing,

it's basically that Kubernetes packages your application and automatically places containers based on their requirements and the resources that are available.

So that's the number one advantage. The second thing: service discovery and load balancing.

There is no need to worry

I mean, if you're going to use Kubernetes, then you don't have to worry about networking and communication, because Kubernetes will automatically assign containers their own IP addresses, and probably a single DNS name for a set of containers which are performing a logical operation.

And of course, there will be load balancing across them, so you don't have to worry about all these things.
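As a sketch of that service discovery and load balancing idea, here is a hypothetical Service manifest; the name `webapp-svc`, the label `app: webapp`, and the port numbers are assumptions for illustration, not anything from the session:

```yaml
# Hypothetical example: a Service that gives the set of pods
# labelled app=webapp one stable DNS name (webapp-svc) and
# load-balances traffic across all of them.
apiVersion: v1
kind: Service
metadata:
  name: webapp-svc
spec:
  selector:
    app: webapp
  ports:
    - port: 80         # port the Service exposes
      targetPort: 8080 # port the containers listen on
```

Other pods in the same namespace can then just talk to `webapp-svc` by name; they never need to know the individual pod IPs.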

So that's why we say that there is service discovery and load balancing with Kubernetes. And the third feature of Kubernetes

is storage orchestration: with Kubernetes, you can automatically mount a storage system of your choice.

You can choose that to be either local storage, or maybe a public cloud provider such as GCP or AWS, or even a network storage system such as NFS or other things, right? So that was feature number three. Now, the fourth feature is self-healing. This is one of my favorite parts of Kubernetes actually, and not just Kubernetes, even with respect to Docker.

I really like this part of self-healing. What self-healing is all about is that whenever Kubernetes realizes that one of your containers has failed, it will restart that container on its own, or it will create a new container in place of this crashed one. And in case your node itself fails, then what Kubernetes would do in that case is, whatever containers were running in that failed node,

those containers would be started on another node, right? Of course, you would have to have more nodes in that cluster; if there's another node in the cluster, definitely room would be made for these failed containers to start the service.
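A minimal sketch of how you ask for that self-healing behaviour, assuming a hypothetical app labelled `app: webapp` and a stock `nginx` image (both made up for illustration); the controller then keeps three replicas alive for you:

```yaml
# Hypothetical example: a Deployment asking for 3 replicas. If a
# container crashes it is restarted; if a pod (or its node) dies,
# a replacement pod is scheduled on another node automatically.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: webapp
spec:
  replicas: 3
  selector:
    matchLabels:
      app: webapp
  template:
    metadata:
      labels:
        app: webapp
    spec:
      containers:
        - name: web
          image: nginx:1.25
```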

So that happens. So the next feature is batch execution.

So when we say batch execution, it's that along with services, Kubernetes can also manage your batch and CI workloads, which is more of a DevOps role.

Right? So as part of your CI workloads, Kubernetes can replace your containers which fail, and it can restart them and restore the original state. That is what is possible with Kubernetes. And then, secret and configuration management.

That is another big feature with Kubernetes.

And that is the concept where you can deploy and update your secrets and application configuration without having to rebuild your entire image, and without having to expose your secrets in your stack configuration or anything, right? So if you want to deploy and update only your secrets, that can be done.

It's not available with all the other tools, right? So Kubernetes is one that does that; you don't have to restart everything and rebuild your entire container.
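The secret management described above can be sketched like this; the Secret name, the key, the password value, and the `myapp` image are all made-up illustrations:

```yaml
# Hypothetical example: a Secret holding a database password,
# injected into a container as an environment variable. Updating
# the Secret does not require rebuilding the image.
apiVersion: v1
kind: Secret
metadata:
  name: db-credentials
type: Opaque
stringData:
  DB_PASSWORD: "s3cr3t"   # illustrative value only
---
apiVersion: v1
kind: Pod
metadata:
  name: app-pod
spec:
  containers:
    - name: app
      image: myapp:1.0    # assumed image name
      env:
        - name: DB_PASSWORD
          valueFrom:
            secretKeyRef:
              name: db-credentials
              key: DB_PASSWORD
```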

That's one benefit. And then we have horizontal scaling, which of course you will know of already: you can scale your applications up and down easily with a simple command.

The simple command can be run on the CLI, or you can easily do it on your GUI, which is your dashboard.

Your Kubernetes dashboard. And auto-scaling is possible, right: based on the CPU usage,

your containers would automatically be scaled up or scaled down.

So that's one more feature. And the final feature that we have is automatic rollbacks and rollouts. Now, what Kubernetes does is, whenever there's an update to your application which you want to release, Kubernetes progressively rolls out these changes and updates to the application or its configurations, by this ensuring that one instance after the other is sent these updates, and it makes sure that not all instances are updated at the same time, thus ensuring that yes, there is high availability.

And even if something goes wrong, then Kubernetes will roll back that change for you.
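That progressive, one-instance-at-a-time rollout can be expressed as a Deployment strategy fragment; the exact numbers here are illustrative assumptions, not anything stated in the session:

```yaml
# Hypothetical fragment of a Deployment spec: roll out changes a
# little at a time, so not all replicas are updated at once. A bad
# rollout can then be undone with `kubectl rollout undo`.
spec:
  strategy:
    type: RollingUpdate
    rollingUpdate:
      maxUnavailable: 1  # at most one pod down during the update
      maxSurge: 1        # at most one extra pod created during the update
```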

So all these things are enabled, and these are the features of Kubernetes.

So if you're really considering a solution for managing your containers, then Kubernetes should be your solution;

that should be your answer.

So that is about the various features of Kubernetes. Now, moving forward here,

let's talk about a few of the myths surrounding Kubernetes, and we are doing this because a lot of people have confusion with respect to what exactly it is.

So people have this misunderstanding that Kubernetes is like Docker, which is a containerization platform, right? That's what people think, and that is not true.

So this kind of confusion is what I intend to solve in the upcoming slides.

I will now talk about what exactly Kubernetes is and what Kubernetes is not. Let me start with what it's not.

Now, the first thing is that Kubernetes is not to be compared with Docker, because it's not the right set of parameters to compare them against. Docker is a containerization platform and Kubernetes is a container management platform, which means that once you have containerized your application with the help of Docker containers or Linux containers, and when you are scaling up these containers to a big number like 50 or a hundred, that's where Kubernetes would come in, when you have multiple containers which need to be managed.

That's where Kubernetes can come in and effectively do it.

You can specify the configurations, and Kubernetes would make sure that at all times these conditions are satisfied.

So that's what Kubernetes is. You can say in your configurations that, at all times,

I want these many containers running

I want these many pods running, and so many other needs, right? You can specify much more than that, and whatever you do, at all times your cluster master, or your Kubernetes master, would ensure that this condition is satisfied.

So that is what exactly Kubernetes is. But that does not mean that Docker does not solve this purpose.

So Docker also has its own plug-in.

I wouldn't call it a plug-in

It's actually another tool of theirs. So there's something called Docker Swarm, and Docker Swarm does a similar thing: it does container management, like mass container management, similar to what Kubernetes does. When you have like 50 to 100 containers, Docker Swarm would help you in managing those containers. But if you look at who is prevailing in the market today, I would say it's Kubernetes, because Kubernetes came in first, and the moment they came in they were backed by Google. They had this huge community which they just swept along with them.

So they have hardly left any market for Docker and for Docker Swarm. But that does not mean that they are better than Docker, because they are, at the end of the day, using Docker.

So Kubernetes is only as good as what Docker is; if there are no Docker containers, then there's no need for Kubernetes in the first place.

So Kubernetes and Docker go hand in hand.


So that is the point you have to note, and I think that would also explain the point that Kubernetes is not for containerizing applications.

Right? And the last thing is that Kubernetes is not for applications with a simple architecture.

Okay, if your architecture, or rather your application's architecture, is pretty complex, then you can probably use Kubernetes to uncomplicate that architecture.

Okay, but if you're having a very simple one in the first place, then using Kubernetes would not serve you any good, and it could probably make it a little more complicated than what it already is, right?

So this is what Kubernetes is not. Now, speaking of what exactly Kubernetes is:

the first point is Kubernetes is robust and reliable. Now when I say robust and reliable, I'm referring to the fact that the cluster that is created, the Kubernetes cluster, right, is very strong.

It's very rigid and it's not going to be broken easily.

The reason being the configurations which you specified, right? At any point of time, if any container fails, a new container would come up, or that whole container would be restarted.

One of those things will definitely happen.

If your node fails, then the containers which were running on that particular node

would start running on a different node, right? So that's why it's reliable and strong, because at any point of time your cluster would be at full force.

And at any time if it's not happening, then you would be able to see that something's wrong, and you have to troubleshoot your node, and then everything would be fine.

So Kubernetes would do everything possible, and it pretty much does everything possible, to let us know that the problem is not at its end and that it's giving the exact result that we want. That's what Kubernetes is doing.

And the next thing is that Kubernetes actually is the best solution for scaling up containers, at least in today's market, because the two biggest players in this market are Docker Swarm and Kubernetes, and Docker Swarm is not really the better one here because they came in a little late. Even though Docker was there from the beginning and Kubernetes came after that, Docker Swarm, which we are talking about, came in somewhere around 2016 or 2017.

Right? But Kubernetes came somewhere around 2015, and they had a very good head start.

They were the first ones to do this, and their backing by Google is just icing on the cake, because whatever problem you have with respect to containers, if you just go online and put your error out there, then you will have a lot of people on GitHub.com, in GitHub issues, and on Stack Overflow resolving those errors, right? So that's the kind of market they have; they are backed by a really huge community.

That's what Kubernetes is. And to conclude this slide: Kubernetes is a container orchestration platform and nothing else.

All right

So I think these two slides would have given you more information and more clarity with respect to what Kubernetes is,

and how different it is from Docker and Docker Swarm, right? So now, moving on, let's go to the next topic, where we will compare Kubernetes with Docker Swarm, and we are comparing with Docker Swarm because we cannot compare Docker and Kubernetes head on.

Okay, so that is what you have to understand. If you are this person over here, if you are Sam who is wondering which is the right comparison, then let me reassure you that the comparison can only be between Kubernetes and Docker Swarm.


So let's go ahead and see what the difference is


Let's start off with your installation and configuration.


So that's the first parameter we'll use to compare these two, and over here Docker Swarm comes out on top, because Docker is a little easier: you have around two or three commands which will help you have your cluster up and running, and that includes the node joining the cluster, right? But with Kubernetes it's way more complicated than Docker Swarm, right? You have close to ten to eleven commands which you have to execute, and then there's a certain pattern you have to follow to ensure that there are no errors, right? Yes, and that's why it's time-consuming and that's why it's complicated.

But once your cluster is ready, at that time Kubernetes is the winner, because the flexibility, the rigidness, and the robustness that Kubernetes gives you cannot be offered by Docker Swarm.

Yes, Docker Swarm is faster, but it's not as good as Kubernetes when it comes to your actual working. And speaking of the GUI:

once you have set up your cluster, you can use a GUI with Kubernetes for deploying your applications.

Right? So you don't need to always use your CLI.

You have a dashboard which comes up, and if you give the dashboard admin privileges, then you can use it.

You can deploy your application from the dashboard itself, everything with just drag-and-drop and click functionality, right? With just click functionality,

You can do that

The same is not the case with Docker swarm

You have no GUI in Docker Swarm. Okay.

So Docker Swarm is not the winner over here.

It's Kubernetes. Now, moving to the third parameter: scalability.

So people again have this misconception that because Kubernetes is better, it is the solution for scaling up,

and that it is better and faster than Docker Swarm.

Well, it could be better, but no, it's not faster than Docker Swarm.

Even if you want to scale up, right? There is a report I recently read where the scaling up in Docker Swarm is almost five times faster than the scaling up with Kubernetes.

So that is the difference

But yes, once your scaling up is done, after that your cluster strength with Kubernetes is going to be much stronger than your Docker Swarm cluster's strength.

That's again because of the various configurations

that would have been specified by then.

That is the thing. Now moving on, the next parameter we have is that load balancing requires manual service configuration.


This is in the case of Kubernetes, and yes, this could be a shortfall.

But with Docker Swarm there are inbuilt load balancing techniques, which you don't need to worry about.

Okay, even the load balancing which requires manual effort in the case of Kubernetes is not too much; there are times when you have to manually specify your configuration and make a few changes, but it's not as much as what you are thinking. And speaking of updates and rollbacks:

what Kubernetes does is the scheduling to maintain the services while updating.


Yeah, that's very similar to how it works in Docker Swarm, wherein you have progressive updates and service health monitoring happening throughout the update. But the difference is, when something goes wrong, Kubernetes goes that extra mile of doing a rollback and putting you back to the previous state, right before the update was launched.

So that is the thing with Kubernetes. And the next parameter

we are comparing these two on is data volumes.

So data volumes in Kubernetes can be shared with other containers, but only within the same pod. So we have a concept called pods in Kubernetes.

Okay, now a pod is nothing but something which groups related containers, right, a logical grouping of containers together.

So that is a pod, and whichever containers are there inside this pod

can have a shared volume.

Okay, like a storage volume. But in the case of Docker Swarm, you don't have the concept of a pod at all.
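A minimal sketch of that pod-scoped volume sharing, assuming two throwaway `busybox` containers (the names, image, and paths are all illustrative); only containers inside this pod can mount the `shared-data` volume:

```yaml
# Hypothetical example: two containers in the same pod sharing an
# emptyDir volume. Containers in other pods cannot mount it.
apiVersion: v1
kind: Pod
metadata:
  name: shared-volume-pod
spec:
  volumes:
    - name: shared-data
      emptyDir: {}        # scratch volume, lives as long as the pod
  containers:
    - name: writer
      image: busybox:1.36
      command: ["sh", "-c", "echo hello > /data/msg && sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
    - name: reader
      image: busybox:1.36
      command: ["sh", "-c", "sleep 3600"]
      volumeMounts:
        - name: shared-data
          mountPath: /data
```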

So the shared volumes can be between any containers.

There is no restriction with respect to that in Docker Swarm. And then finally we have the last parameter: logging and monitoring.

So when it comes to logging and monitoring, Kubernetes provides inbuilt tools for this purpose.

Okay, but with Docker Swarm you have to install third-party tools if you want to do logging and monitoring. So that is the drawback with Docker Swarm. Logging is really important, because you will know what the problem is.

You'll know which container failed, what happened there, what exactly the error is, right? So logs would help give you that answer. And monitoring is important because you have to always keep a check on your nodes, right? So as the master of the cluster, it's very important that there's monitoring, and that's where Kubernetes has a slight advantage over Docker Swarm.

Okay, but before I finish this topic, there is this one slide

I want to show you which is about the statistics

So this stat, I picked it up from Platform9, which is a company that writes about tech.

Okay, and what they've said is that of the number of news articles that were produced in that one particular year, 90% of those covered Kubernetes, compared to the 10 percent

on Docker Swarm. Amazing, right? That's a big difference.

That means for every one blog or article written on Docker Swarm, there are

nine different articles written on Kubernetes. And similarly for web searches: for web searches, Kubernetes is 90 percent compared to Docker Swarm's 10%. And publications, GitHub stars,

The number of commits on GitHub

all these things are clearly won by Kubernetes; it is everywhere.

So Kubernetes is the one that's dominating this market, and that's pretty visible from this stat also, right? So I think that pretty much brings an end to this particular topic. Now, moving forward,

Let me show you a use case

let me talk about how this game, this amazing game called Pokémon GO, was powered with the help of Kubernetes.

I'm pretty sure you all know what it is, right? You guys know Pokémon GO.

It's the very famous game, and it was actually the best game of the year 2017, and the main reason for it being the best is Kubernetes. Let me tell you why. But before I tell you why, there are a few things which I want to just talk about: I'll give you an overview of Pokémon GO and let me talk about a few key stats.

So Pokémon GO is an augmented reality game developed by Niantic for your Android and for iOS devices.

Okay, and those key stats read that they've had like 500 million plus downloads overall and 20 million plus daily active users.

Now that is massive. Daily, if you're having like 20 million plus users, then you have achieved an amazing thing.

So that's how good this game is

Okay, and then this game was actually initially launched only in North America, Australia, and New Zealand. And I'm aware of this fact because I'm based out of India and I did not get access to this game, because the moment news got out that we have a game like this,

I started downloading it, but I couldn't really find any link, or I couldn't download it at all.

So they launched it only in these countries, but despite releasing it in just these three countries,

they had a major problem, and that problem is what I'm going to talk about in the next slide, right? So my use case is based on that very fact: in spite of launching it only in these three countries, probably North America and then Australia and New Zealand, they could have had a meltdown, but rather, with the help of Kubernetes, they used that same problem as the basis for their real success.

So that's what happened

Now let that be a suspense, and before I get to that, let me just finish this slide. One amazing thing about Pokémon GO is that it has inspired users to walk over 5.4 billion miles in a year.

Yes, do the math: five point four billion miles in one year.

That's again a very big number, and it says that it has surpassed engineering expectations by 50 times.

Now this last line is not with respect to Pokémon GO the game, but it is with respect to the backend and the use of Kubernetes to achieve whatever was needed.

Okay, so I think I've spent enough time over here.

Let me go ahead and talk about the most interesting part and tell you how the backend architecture of Pokémon GO looked. Okay?

So you have a Pokémon GO container, which had two primary components: one is your Google Bigtable, which is your main

database, where everything is going in and coming out, and then you have your programs, which run on your Java cloud, right? So these two things are what is running your game. MapReduce and Cloud Dataflow were something that was used for scaling up.

Okay, so it's not just the container scaling up, but it's with respect to the application: how the program would react when there are these increased numbers of users, and how to handle an increased number of requests.

So that's where the MapReduce paradigm comes in, right? The mapping and then the reducing, that whole concept.

So this was their one deployment

Okay, and when we say 5x, it means that they had this extra capacity which could go up to five times.


So technically they could only serve X number of requests, but in case of failure conditions or heavy traffic load conditions, the max the servers could handle was 5x, because after 5x the servers would start crashing; that was their prediction.

Okay, and what actually happened with Pokémon GO on releasing in just those three different geographies

is that once they deployed it, the usage became so much that it was not X, which is technically their failure limit, and it is not even 5x, which is the servers' capability, but the traffic that they got was up to 50 times, 50 times more than what they expected.

So, you know that when your traffic is so much, then you're going to be brought down to your knees.

That's a definite and that's a given right

This is like a success story, a too-good-to-be-true kind of story, and in that kind of scenario, if the requests coming in are so many that they reach 50x, then it's gone, right? The application has gone for a toss.

So that's where Kubernetes came in, and they overcame all the challenges.

How did they overcome the challenges? Because Kubernetes can do both vertical scaling and horizontal scaling with ease, and that is the biggest problem, right? Because any application and any other company can easily do horizontal scaling, where you just spin up more containers and more instances and you set up the environment, but vertical scaling is something which is very specific, and this is even more challenging.

Now, it's more specific to this particular game, because the augmented reality would keep changing whenever a person moves around or walks around somewhere in his apartment or somewhere on the road.

Then the RAM, right, that would have to increase; the memory, the in-memory and the storage memory, all this would increase, so in real time your server's capacity also has to increase vertically.

So once they have deployed it, it's not just about horizontal scalability anymore

It's not about satisfying more requests.

It's about satisfying that same request with respect to having more hardware space, more RAM space, and all these things, right? That one particular server should have more performance capability.

That's what it's about, and Kubernetes solved both of these problems effortlessly, and Niantic were also surprised that Kubernetes could do it, and that was because of the help that they got from Google.

I read an article recently that Niantic's leadership

met with some of the top executives at Google and then GCP, right, and they figured out how things were supposed to go, and they of course met with the heads of Kubernetes, and they figured out a way to actually scale it up to 50 times in a very short time.

So that is the challenge that they were presented with, and thanks to Kubernetes,

they could handle fifty times the traffic that they expected, which is like a very one-off story, and which is very, very surprising that, you know, something like this would happen.

So that is about the use case, and that pretty much brings an end to this topic of how Pokémon GO used Kubernetes to achieve something. Because in today's world, Pokémon GO is a really revered game because of what it did, right? It basically beat all the stereotypes of a game, and whatever anybody could have had negative against games, right? They could say that these mobile games and video games make you lazy.

They make you just sit in one place, and all these things.

Right, and Pokémon GO was something which was different: it actually made people walk around and it made people exercise, and that goes on to show how popular this game became. If Kubernetes lies at the heart of something which became so popular and so big, then you should imagine how big, or how beautiful, Kubernetes is, right? So that is about this topic. Now, moving forward,

let me just quickly talk about the architecture of Kubernetes.


So the Kubernetes architecture is very simple.

We have the Kube Master, which controls pretty much everything.

We should note that it is not like Docker Swarm, where your master will also have containers running.

Okay, so there won't be containers over here.

So all the containers and all the services which will be running will be only on your nodes.

It's not going to be on your master, and you would have to first of all create your Kube Master.

That's the first step in creating your cluster, and then you would have to get your nodes to join your cluster.


So be it your pods or be it your containers, everything would be running on your nodes, and your master would only be scheduling or replicating these containers across all these nodes and making sure that your configurations are satisfied, right? Whatever you specify in the beginning. And the way you access your Kube Master is via two ways: you can either use it via the UI or via the CLI.

So the CLI is the default way, and this is the main way technically, because if you want to start setting up your cluster you use the CLI; you set up your cluster, and from here you can enable the dashboard, and when you enable the dashboard you can get the GUI, and then you can start using your Kubernetes and start deploying just with the help of the dashboard, right, with just the click functionality.

You can deploy an application which you want, rather than having to write

a YAML file or feed commands one after the other from the CLI.

So that is the working of Kubernetes


Now, let's concentrate a little more on how things work on the node end.

Now, as said before, the Kubernetes master controls your nodes, and inside nodes you have containers.

Okay, and now these containers are not just contained inside them, but they are actually contained inside pods.

Okay, so you have nodes inside which there are pods, and inside each of these pods

there will be a number of containers, depending upon your configuration and your requirement. Right. Now, these pods which contain a number of containers are a logical binding or logical grouping of these containers. Supposing you have one application X which is running in Node 1,


so you will have a pod for this particular application, and all the containers which are needed to execute this particular application will be a part of this particular pod, right? So that's how a pod works, and that's the difference between Docker Swarm and Kubernetes, because in Docker Swarm you will not have a pod.

You just have containers running on your node. And the other two terminologies which you should know are those of the replication controller and the service.

Your replication controller is the master's resource for ensuring that the requested number of pods are always running on the nodes, right? So that's a confirmation or an affirmation which says that, okay,

this many pods will always be running and this many containers will always be running, something like that.

Right? So you see it and the replication controller will alwaysensure that's happening and your service is just an object on the master that providesload

I don't think of course is replicated group of PODS

Right? So that's how Humanitiesworks and I think this is good enough introduction for you

And I think now I can go to the demopart where and I will show you how to deploy applications on your communities by eitheryour CLI, or either via your Jama files or by or dashboard

Okay guys, so let's get started, and for the demo purpose

I have two VMs with me.

So as you can see, this is my Kube Master, which would be acting as my master in my cluster.

And then I have another VM, which is my Kube Node 1.

So it's a cluster with one master and one node.

All right.

Now, for ease of use in this video, I have compiled the list of commands in this text document, right? So here I have all the commands which are needed to start your cluster, and then the other configurations and all those things.

So I'll be copying these commands, and I'll show you side by side, and I will also explain as I do that what each of these commands means. Now there's one prerequisite that needs to be satisfied,

and that is that the master should have at least two core CPUs

and 4 GB of RAM, and your node should have at least one core CPU and 4 GB of RAM. So just make sure that this much hardware is given to your VMs. If you are running a Linux operating system directly, well and good, but if you are using a VM on top of a Windows OS, then I would request you to satisfy these things.

Okay, these two criteria, and I think we can straight away start.

Let me open up my terminal first of all.


This is my node;

I'm going back to my master.



So first of all, if you have to start your cluster, you have to start it from your master's end.

Okay, and the command for that is kubeadm init; you specify the pod network flag and the API server flag.

We are specifying the pod network flag because the different containers inside your pods should be able to talk to each other easily.

Right? So that was the whole concept of self-discovery, which I spoke about earlier during the features of Kubernetes.

So for this self-discovery, we have different pod networks using which the containers would talk to each other, and if you go to the documentation, the Kubernetes documentation,

you can find a lot of options: you can use either a Calico pod network or you can use a flannel pod network.

So when we say pod network, it's basically framed as the CNI.

Okay, container network interface.

Okay, so you can use either a Calico CNI or a flannel CNI or any of the other ones.

These are the two popular ones, and I will be using the Calico CNI.

So this is the network range for this particular pod network, and this is what we'll specify over here.

Okay, and then over here we have to specify the IP address of the master.

So let me first of all copy this entire line.

And before I paste it here, let me do an ifconfig and find out what is the IP address of this particular machine, of my master machine.

The IP address is 192.168.56.101.

So let's just keep that in mind, and let me paste the command over here. In place of the master IP address

I'm going to specify the IP address of the master.

Okay, like I just read out,

it is 192.168.56.101, and for the pod network,

I told you that I'm going to use the Calico pod network.

So let's copy this network range and paste it here. So all my containers inside this particular pod network would be assigned an IP address in this range.
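Putting the pieces together, the init command being described looks roughly like this. This is a sketch: the CIDR is Calico's commonly used default range, and the advertise address is this demo's master IP; your values will differ.

```shell
# Run on the master as root. The pod-network CIDR is Calico's
# default range; the advertise address is this demo's master IP.
sudo kubeadm init \
  --pod-network-cidr=192.168.0.0/16 \
  --apiserver-advertise-address=192.168.56.101
```

On success, kubeadm prints the follow-up setup commands and a `kubeadm join` token, which is what the rest of this demo uses.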


Now, let me just go ahead and hit enter, and then your cluster would begin to set up.

So it's going as expected.

So it's going to take a few minutes.

So just hold on there.


My Kubernetes master has initialized successfully, and if you want to start using your cluster, you have to run the following as a regular user.

Right, so we have three commands which are suggested by Kubernetes itself.

And that is actually the same set of commands which I have here.

Okay, so I'll be running the same commands.

This is to set up the environment.

And then after that we have this token generated, right, the joining token.

So with the token, along with the IP address of the master, if I basically execute this command on my nodes, then I will be joining this cluster where this is the master, right? So this is my master machine.

This has created the cluster.

So now, before I do this, there are a few steps in the middle.

One of those steps is executing all these three commands, and after that comes bringing up the dashboard and setting up the pod network, right, the Calico pod network.

So I have to set up the Calico pod network and then also set up the dashboard, because if I do not start the dashboard before the nodes join, then I will have very severe complications.

So let me first of all go ahead and run these three commands one after the other.

Okay, since I have the same commands in my text doc,

I'll just copy them from there.

Okay, so Ctrl+C, paste, enter.

Okay, and I'll copy this line.

So remember, you have to execute all these things as a regular user.

Okay, you can probably use sudo,

but yeah, you'll be executing it as your regular user. And it's asking me if I want to overwrite whatever is there in this directory; I would say yes, because I've already done this before, but if you are setting up the cluster for the first time, you will not have this prompt.


Now, let me go to the third line, copy this, and paste it here.

Okay, perfect.
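For reference, the three post-init commands that kubeadm suggests (and that are being copied here) are the standard kubeconfig setup, run as the regular user:

```shell
# Standard post-"kubeadm init" steps, run as the regular user:
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
```

This copies the cluster's admin credentials into the user's home directory so that plain `kubectl` commands can reach the API server.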

Now I've run these three commands as I was told by Kubernetes.

Now, the next thing that I have to do, before I check the node status and all those things,

is just set up the network.

Okay, the pod network.

So like I said, this is the line, this is the command that we have to run to set up the Calico network,

okay, for all of the nodes to join our particular network.

So it will be picking up the template of this Calico deployment file, which is present over here at this path.

So hit enter, and yes, my thing is created:

Calico kube-controllers created. Now, I'll just go back here, and at this point of time

I can check if my master is connected to the cluster.

Okay, so I can run the kubectl get nodes command. Okay.
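As a rough sketch, the two steps just described look like this. The Calico manifest URL is illustrative (it changes between Calico releases), so treat it as an assumption and check the current Calico docs for your version:

```shell
# Apply the Calico pod network from its published manifest
# (URL is illustrative; check the current Calico docs for your release).
kubectl apply -f https://docs.projectcalico.org/manifests/calico.yaml

# Then verify the master shows up with status Ready
kubectl get nodes
```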

This would say that I have one particular resource connected to the cluster.

Okay, the name of the machine, and the role is master, and the status is Ready.

Okay, if you want to get an idea of all the different pods which are running by default, then you can do kubectl get pods along with a few options.

Okay, you should specify these flags, and they are:

--all-namespaces, and with the -o flag specify wide.

So this way I get all the pods which are started by default.

So there are different services, like etcd, the kube-controllers, the Calico node; for every single service

there's a separate container and pod started.

Okay, so that's what you can understand from this part.
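The flag combination just mentioned can be written out like this:

```shell
# List every pod in every namespace, with node placement shown
kubectl get pods --all-namespaces -o wide
```

The `-o wide` output adds columns such as the pod IP and the node each pod is scheduled on, which is what lets you see where the system pods landed.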

Okay, that is a safe assumption.

Now that we know the cluster is ready and the master is part of the cluster,

let's go ahead and execute this dashboard command.


Remember, if you want to use the dashboard then you have to run this command before your nodes join this particular cluster, because the moment your nodes join the cluster, bringing up the dashboard is going to be challenging and it will start throwing errors.

OK, it will say that it's being hosted on the node, which we do not want; we want the dashboard to be on the server itself, right, on the master.

So first, let's bring the dashboard up.

So I'm going to copy this and paste it here.

Okay, enter. Great,

the Kubernetes dashboard is created.

Now, the next command that you have to run to get your dashboard up and running is kubectl proxy.

Okay, with this we get a message saying that it's being served at this particular port number, and yes, now you can access localhost.

What was the port number again? Localhost? Yeah, 127.0.0.1 is localhost,

okay, followed by port number 8001, okay.
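The two dashboard steps just shown can be sketched as below. The manifest URL is an assumption based on where the dashboard project published its recommended manifest in this era; it moves between releases, so check the dashboard project's docs:

```shell
# Deploy the Kubernetes dashboard from its published manifest
# (URL is illustrative; check the dashboard project's docs for your release).
kubectl apply -f https://raw.githubusercontent.com/kubernetes/dashboard/master/src/deploy/recommended/kubernetes-dashboard.yaml

# Serve the API (and the dashboard behind it) locally on port 8001
kubectl proxy
```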

Yeah, so right now we are not seeing the dashboard, because it is technically accessed on another URL.

But before we do that, there are various other things that we have to set up.

Okay, because right now we have only enabled the dashboard. Now, if you want to access the dashboard, you have to first of all create a service account.

The instructions are here.

Okay, you have to first of all create a service account for the dashboard. Then you

have to say that, okay, you are going to be the admin user of this particular service account, and we have to enable that functionality here.

You should give the dashboard admin privileges, and you should do the cluster role binding.

Okay, the cluster role binding is what you have to do, and after that, to get access to that particular dashboard,

we have to basically give a key.

It's like a password.

So we have to generate that token first, and then we can access the dashboard.

So again, for the dashboard there are these three commands.

Well, you can get confused down the line,

but remember this is separate from the above.

So what we did initially was run the three commands which Kubernetes

told us to execute, and after that the next necessity was to bring up the pod network.

So that was the command for the pod network, then this was the command for getting the dashboard up, and right after that we ran the proxy, and then it starts being served on that particular port number.

So my dashboard is being served, but I'm not getting the UI here, and if I want to get the UI I have to create the service account and do these three things, right? So let's start with this and then continue.

I hope this wasn't confusing, guys.

Okay, I can't do it here,

so let me open a new terminal.

Okay, here I'm going to paste it.

And yes, the service account is created.

Let me go back here and execute this command. When I'm doing the role binding, I'm saying that my dashboard should have admin functionalities, and that's going to be the cluster role,

okay, cluster-admin, and then the service account is what I'm using, and it's going to be in the default namespace.

So when I created the account, I said that I want to create this particular account in the default namespace,

so the same thing I'm specifying here.

Okay, dashboard-admin created. Good.
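A hedged sketch of the service-account and role-binding steps just performed. The account name `dashboard` and the binding name `dashboard-admin` are assumptions chosen to match the narration:

```shell
# Create a service account for the dashboard in the default namespace
kubectl create serviceaccount dashboard -n default

# Bind it to the built-in cluster-admin role so the dashboard
# gets admin privileges across the cluster
kubectl create clusterrolebinding dashboard-admin \
  --clusterrole=cluster-admin \
  --serviceaccount=default:dashboard
```

Note that binding cluster-admin is convenient for a demo but is far more privilege than a production dashboard should have.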

So let's generate the token that is needed to access my dashboard.

Okay, before I execute this command, let me show you that once. So if you go to this URL, right, /api/v1/namespaces...

Yep, let me show it to you here.

So this is the particular URL where you will get login access to the dashboard:

localhost:8001/api/v1/namespaces/kube-system/services/http:kubernetes-dashboard:/proxy.

Remember this one; that is the same thing over here, and like I told you, it's asking me for my password.

So I would choose token, but let me go here, hit the command, and generate the token.

So this is the token. I'm going to copy this, from here till here, say copy, and this is what I have to paste over here.

All right.

So sign in, and

yes, perfect. With this, this is my dashboard, right? This is my Kubernetes dashboard.
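One common way to print that bearer token on Kubernetes versions of this era (where a token secret was auto-created for each service account) is sketched below; it assumes the service account is named `dashboard`, as above:

```shell
# Look up the auto-created token secret for the "dashboard" service
# account and decode its bearer token (older Kubernetes releases).
kubectl get secret \
  $(kubectl get serviceaccount dashboard -o jsonpath='{.secrets[0].name}') \
  -o jsonpath='{.data.token}' | base64 --decode
```

On newer clusters that no longer auto-create token secrets, `kubectl create token dashboard` is the equivalent, but that command did not exist at the time of this demo.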

And this is how it looks. Whatever I want,

I can get an overview of everything.

So there are workloads;

if I come down, there are deployments; I have the option to see the pods; and then I can see what the different running services are, among most of the other functionalities.

So right now we don't have any bar graph or pie graph showing which cluster is up, which pod is up and all, because I have not added any node and there is no service or pod running,

right? So this is the outlay of the dashboard.

Okay, you will get access to everything you want from the left.

You can drill down into each of these namespaces, or pods, or containers. Right now,

if you want to deploy something through the dashboard, right, through the click functionality, then you can go here.

Okay, but before I create any container, or before I create any pod or any deployment for that matter, I have to have nodes, because these will be running only on nodes.

Correct? Whatever

I deploy will run only on

nodes. So let me first open up my node and get the node to join this particular cluster of mine.

Now, if you remember, the command to join the node got generated at the master end, correct?

So let me go and fetch that again.

So that was the first command that we ran, right, this one.

So let's just copy this

and paste this one at my node end.

This is the IP of my master, and it will just join at this particular port number. Let me hit enter.

Let's see what happens.

Okay, let me run it as the root user.

Okay? Okay, perfect. Successfully established connection with the API server, and it says this node has joined the cluster. Right, bingo!

So this is good news to me.

Now if I go back to my master, and in fact, if I open up the dashboard, there would be an option of nodes.

Right? So initially, it's showing the master is

the only thing that is part of my nodes. Let me just refresh it, and you would see that even node-1 would be a part of it.

Right? So there are two resources, two instances: one is the master itself and the other is the node. Now, if I go to overview, you will get more details. If I start my application, if I start my services or containers, then all those would start showing up here, right.
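The join command generated by `kubeadm init` has this general shape. The token and hash below are placeholders, not the real values from this demo:

```shell
# Run on the node as root; <token> and <hash> are placeholders
# for the values printed by "kubeadm init" on the master.
sudo kubeadm join 192.168.56.101:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
```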

So it's high time

I start showing you how to deploy, how to deploy using the dashboard. I told you this is the functionality,

so let's go ahead and click on this Create.

And yeah, mind you, the dashboard is the easiest way to deploy your application, right? So even developers around the world do the same thing: the first time, they probably create it using the YAML file,

and from there on they edit the YAML file on top of the dashboard itself, or they create or deploy the application from here itself.

So we'll do the same thing.

Create an app using the click functionality;

you can do it over here.

So let's give a name to your application.

I'll just say edureka-demo.

Okay, let that be the name of my application, and I want to basically pull an nginx image.

I want to launch an nginx service.

So I'm going to specify the image name from Docker Hub.

So it says either the URL of a public image on any registry, or a private image hosted on Docker Hub or Google Container Registry.

So I don't have to specify the URL per se; if you are specifying an image to be pulled from Docker Hub, then you can just use the name of the image which has to be pulled.

That's good enough.

Right, nginx is the name, and that's good enough. And I can choose to set my number of pods to two; that way

I will have two pods running.

Right? So this is done, and the final part... actually, even without the final part

I can straight away deploy it.

Okay, but if I deploy it now, then my application would be created, but I just wouldn't get the UI.

I mean, I won't see the nginx service. So to get the service,

I have to enable one more functionality here.

Okay, the service here: click on the drop-down and you will have the External option, right? So click on External; this would let you access this particular service from your host machine, right? You can see the explanation here: an internal or external service can be defined to map an incoming port to a target port seen by the container. So nginx, which would be hosted on one of the container ports,

would not be accessible if I didn't specify anything here, but now that I've said access it externally on a particular port number, it will get mapped for me.

By default, nginx runs on port number 80.

So the target port would be the same, but the port I want to expose it on,

I can map to anything I want, so I'm going to say 82.

All right, so that's it.

It's as simple as this. This way

your application is launched with two pods. So I can just go down and click on Deploy, and this way my application should be deployed.

My deployment is successful.

There are two pods running.

So what I can do is go to the service and try to access the UI, right? So it says that it's running on this particular port number, 32153.

So copy this and say localhost:32153. Okay, hit enter. Bingo!

So it says Welcome to nginx, and I'm getting the UI, right? So I'm able to access my application, which I just launched through the dashboard.

It was as simple as that.

So this is one way of launching or making a deployment.

There are two other ways,

like I told you: one is using your CLI itself, the command line interface of your Linux machine, which is the terminal, or you can do it by uploading the YAML file.

You can do it by uploading the YAML file because everything here is in the form of YAML or JSON.

Okay, that's like the default way.

So whatever deployment I made, right, those configurations are stored in the form of YAML.

So if I click on view or edit YAML,

all the configurations are specified; the default ones have been taken. So I said the name should be edureka-demo; that is

the name of my deployment, okay.

So kind is Deployment, and the version of my API

is this one, extensions/v1beta1, and then other metadata; I have various other lists.

So if you know how to write a YAML file, then I think it would be a little easier for you to understand and create your deployment, because a YAML file is all about lists and maps: lists of maps and maps of lists.

So it might be a little confusing.

So probably we'll have another tutorial video on how to write a YAML file for a Kubernetes deployment, but I would keep that for another session.


Let me get back to this session and show you the next deployment.

Okay, the next deployment technique. So let me just close this and go back to overview.

Okay, so I have this one deployment. Very good.

So let's go to this.

So what I'll do is, let me delete this deployment.

Okay, or let me at least scale it down, because I don't want too many resources to be used on my node, also because I will have to show two more deployments.

Right, so I have reduced my deployment over here,

and I think that should be good enough.

So let's go back to the Kubernetes setup, this document of mine.

So this is where we're at,

right? We could check our deployments; we could do all these things.

So one thing which I might have forgotten is showing the nodes which are part of the cluster, right.

So this is my master.

Yeah, so I kind of forgot to show you this: kubectl get nodes.

So the same view that you got on your dashboard, you get it here

also. I mean, these are the two nodes, and this is the name and all these things.

Okay, and I can also do kubectl get pods, which would tell me all the pods that are running. edureka-demo

is the pod which I have started.

This is my pod.

Now, if I specify the other flags, right, with --all-namespaces and with wide, then all the default pods which get created along with your Kubernetes cluster,

those will also get displayed.

Let me show you that also, just in case. Okay.

So this is the one which I created, and the other ones are the default deployments that come with Kubernetes; the moment you install and set up the cluster, these get started.

Okay, and if you can see here, this particular edureka-demo, which I started, is running on my Node 1 along with this kube-proxy and this particular Calico node. So these services are running on both master and node,

and this one is running only on my Node 1, right? You can see this, right: the Calico node runs both on my node over here and on my master, and similarly the kube-proxy runs on my node here and on my master.

So this is the one that's running only on my node.

Okay, so getting back to what I was about to explain to you:

the next part is how to deploy anything through your terminal. Now, to deploy your same nginx application through your CLI,

we can follow this set of commands. Okay, so there are a couple of steps here.

First of all, to create a deployment

we have to run this command: kubectl create deployment nginx, and then the name of the image that you want to use.

This is going to be the name of your deployment,

and this is the name of the image which you want to use. So Ctrl+C, and let me go to the terminal here on my master. I'm executing this command, kubectl create deployment.

So the deployment nginx is created. If you want, we can verify that also over here. So under deployments, right now we have one entry, edureka-demo, and yes, now you can see there are two: nginx and edureka-demo.
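The CLI step just described, as a short sketch:

```shell
# Create a deployment named "nginx" from the nginx image on Docker Hub
kubectl create deployment nginx --image=nginx

# Verify it appears alongside the earlier dashboard deployment
kubectl get deployments
```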

So this is pending;

I mean, it would take a few seconds.

So in the meanwhile, let's continue with the other steps.

Once you have created the deployment, you have to create the service,

okay, and after that, the NodePort, which can be used to access that particular service, right? Because a deployment is just a deployment; you're just deploying your container. If you want to access it,

like I told you earlier, from your local machine, from your host machine, all those things,

then you have to enable the NodePort.

If you want to get your deployments on your terminal, you can run this command: kubectl get deployments.

Okay, nginx also comes up over here, right? If you want more details about your deployment,

you can use the kubectl describe command; you get more details about this particular deployment, as to what the name is, what the port number is, and all these things.


Let's not complicate this; you can probably use that for understanding later.

So once that is done, the next thing that you have to do is create the service on the nodes. You have created the deployment, but yes, create the service on the nodes using this particular command: kubectl

create service, and say nodeport.

Okay, this means you want to access it at this particular port number; you're doing the port mapping, 80:80.

Okay, container port 80 to the internal node port 80.

So the service for nginx is created.

And if you want to check which of the deployments are running on which nodes, you can run the kubectl get services command.

Okay, this would tell you, okay, you have two different services, edureka-demo and nginx, and they are on these port numbers and on these nodes, right? So kubernetes is the one which got created automatically; edureka-demo is the one which I created;

okay, nginx is, again, the one which I created. The kubernetes service comes up on its own; I'm just pointing this out to you because it is there for the cluster itself.

So let's just go back here, and then, yes, similarly, if you want to delete a deployment, then you can just use this command: kubectl delete deployment, followed by the name of the deployment, right? It's pretty simple.
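A sketch of the service and cleanup commands just mentioned (the `--tcp` flag expresses the 80:80 port mapping from the narration):

```shell
# Expose the nginx deployment via a NodePort service,
# mapping service port 80 to container port 80
kubectl create service nodeport nginx --tcp=80:80

# See which services exist and which node ports they were assigned
kubectl get services

# Delete a deployment by name when you are done with it
kubectl delete deployment nginx
```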

You can do it this way;

otherwise, from the dashboard,

you can delete it like I showed you: you click over here, and then you can click on delete. And then if you want to scale, you can scale it.

So both of these deployments of mine have one pod each, right? So let's do one thing:

let's just go to the nginx service,

and here let's try accessing this particular service.

Localhost...

Okay, perfect. Here

also it says Welcome to nginx, right?

So with this you can understand that the port mapping worked, and by going to the service you will get to know on which port number you can access it on your host machine, right? So this is the internal container port mapped to this particular port of mine.

Now, if not for this, if this doesn't work, you can also use the cluster IP for the same thing. The cluster IP is basically the IP using which all your containers access each other, right? So each of your pods will have an IP,

so whatever is running in their containers will again be accessible on your cluster IP. So it's the same thing, right? So let me just close these pages, and that's how you deploy an application through your CLI.

So this brings us to the last part of this video, which is nothing but deployment via YAML file.

So again, for deployment via YAML file, you have to write your YAML code, right? You have to either write your YAML code or your JSON code, correct? So this is the code which I have written, just in

YAML format.

And in fact, I already have it in my machine here.

So how about I just do an ls? Yeah, there is deployment.yaml.

All right, so let me show you that. So this is my YAML file.

So here I specify various configurations, similar to how I did it using the GUI or via the CLI; it's something similar, just that

I specify everything in one particular file here,

if you can see that.

I have specified the API version;

okay, so I'm using extensions/v1beta1.


I can do this, or I can just simply specify version 1; I can do either of those. And then the next important line is the kind. So kind is important because you have to specify what kind of file it is:

is it a Deployment file, or is it for a pod deployment, or is it for your container deployment, or is it the overall deployment? What is it? So I've said Deployment, okay, because I want to deploy the containers also along with the pod.

So I'm saying Deployment. In case you want to deploy only the pod, which you realistically don't need to...

why would you deploy just the pod? But in case you do want to deploy a pod, then you can go ahead and write Pod here and just specify what the different containers are.

Okay, but in my case, it's a complete deployment, right, with the pods and the services and the containers.

So I will go ahead and write other things, and under the metadata

I will specify the name of my application.

I can specify whatever I want.

I can put my name also over here, like vardhan, okay, and I can save this. And then the important part is this spec part.

So here is where you set the number of replicas.

Do you remember I told you that there's something called a replication controller, which controls the number of pods that you will be running?

So it is that line.

So if I have it set to two over here, it means that I will have two pods running of this particular application, vardhan.

Okay, what exactly am I doing here under spec? I'm saying that I want two containers. So I have indented the containers line over here, and then I have two containers inside.

So the first container which I want to create is of the name front-end.

Okay, and I'm using an nginx image, and similarly,

the port number that this would be active on is container port 80.

All right, and then I'm saying that I want a second container, and I could rename this to anything; I can say back-end, and I can choose which image I want.

I can probably choose an httpd image also.

Okay, and I can again say the ports that this will be running on; I can say the container port that it should run on is port number 88, right? So that's how simple it is.

All right.
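Reconstructing the file as narrated, it looks roughly like this. The names, images, and port values follow the narration; the label selector lines are assumptions needed for a well-formed manifest, and extensions/v1beta1 is the old API version from this era of Kubernetes (modern clusters use apps/v1):

```yaml
apiVersion: extensions/v1beta1
kind: Deployment
metadata:
  name: vardhan
spec:
  replicas: 2              # the replication controller keeps two pods running
  template:
    metadata:
      labels:
        app: vardhan       # assumed label so the pods belong to this deployment
    spec:
      containers:
      - name: front-end
        image: nginx
        ports:
        - containerPort: 80
      - name: back-end
        image: httpd
        ports:
        - containerPort: 88
```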

And since it's your first video tutorial, the important takeaways from this YAML file configuration are that under spec you have to specify the containers, and yes, everything is in YAML format with all the indentations and all these things.

Okay, even if you have an extra space anywhere over here, then your YAML file would throw an invalid error.

So make sure that is not there.

Make sure you specify the containers appropriately. If it's going to be just one container,

well and good; if it's two containers,

make sure you indent it the right way, and then you can specify the number of pods.

You want to give a name to your deployment, and mainly just follow these rules.

So once you're done with this, just save it and close the YAML file.


So this is your deployment YAML.

Now, you can straight away upload this YAML file to your Kubernetes.

Okay, and that way your application would be straight away deployed.

Now the command for that is kubectl create -f and the name of the file.

So let me copy this; the name of my file is deployment.yaml.

So let me hit enter.

So my deployment, the third deployment, vardhan, is also created, right? So we can check our deployments with the earlier command,

that is nothing but kubectl get deployments.

It's not get deployment;

it's get deployments.

And as you can see here, there is an edureka-demo, there is nginx, and there is vardhan, and the funny thing which you should have noticed is that I said I want two replicas, right, two pods.

So that's why the desired is two; currently, up-to-date is one. So okay, up-to-date is two, brilliant; available is 0, because, well, let's just give it a few seconds; in 23 seconds

I don't think the pod would have started.
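The YAML-based flow above, sketched end to end. The deployment name `vardhan` is the one used in this demo's file:

```shell
# Create the deployment from the YAML file
kubectl create -f deployment.yaml

# Check its status (note: "deployments", plural)
kubectl get deployments

# Remove it by name when no longer needed
kubectl delete deployment vardhan
```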

So let's go back to our dashboard and verify whether a third deployment comes up over here.

Okay, perfect.

So that's how it's going to work.

Okay, so it's probably going to take some more time because the containers are just starting.

So let's just give it some more time.

This could well be because my node has very few resources, right? So I have too many deployments; that could be the very reason.

So what I can do is go ahead and delete the other deployments so that my node can handle these many containers and pods, right? So let me delete this particular nginx deployment, and let me also delete this edureka-demo deployment of mine.

Now let's refresh and just wait for this to happen.

So what I can do instead is have a very simple deployment, right? So let me go back to my terminal and delete my deployment.

Okay, and let me redeploy it again. So, kubectl delete deployment.

Okay, so the vardhan deployment has been deleted. Okay.

So let's just clear the screen, and let's do gedit of the YAML file again, and here let's make things simpler. Let me

just delete this container from here.

Let me save this, right, and close this. Now

let me create a deployment with this.

So vardhan is created. Let me go up here and refresh;

let's see what happens.

So this time it's all green, because it's all healthy.

My nodes are successful, or at least it's going to be successful: container creating.

So two pods of mine are up and running, and both my pods are running, right, and both are running on Node 1. Pods: two of two. Those are the two deployments, and the replica set, and then services, right? So it's nginx which is the base image being used.

So well and good;

this is also working.

So guys,

yeah, that's about it.

So when I tried to upload it earlier, maybe there was some other error, probably some small mistake in the YAML file, or it could have been because my node had too many containers running; those could have been the reasons.

But anyways, this is how you deploy through your YAML file.

All right, so that kind of brings us to the end of this session, where I've shown you a demonstration of deploying your containers in three different ways: CLI, dashboard, and your YAML files.

Hey everyone, this is Reyshma from Edureka

And today we'll be learning what Ansible is.

First, let us look at the topics that we'll be learning today.

Well, it's quite a long list.

It means we'll be learning a lot of things today.

Let us take a look at them one by one.

So first we'll see the problems that existed before configuration management and how configuration management helped to solve them.

We'll see what Ansible is and the different features of Ansible. After that,

we'll see how NASA implemented Ansible to solve all their problems.

After that,

we'll see how we can use Ansible for orchestration, provisioning, configuration management, application deployment, and security.

And in the end, we'll write some Ansible playbooks to install a LAMP stack on my node machine and host a website on my node machine.

Now before I tell you about the problems, let us first understandwhat configuration management actually is

Well configuration management is actuallythe management of your software on top of your Hardware

What it does is that it maintainsthe consistency of your product based on its requirements its design and its physical andfunctional attributes

Now, how does it maintain the consistency it is because the configurationmanagement is applied over the entire life cycle of your system

And hence

It provides you with very good visibility and control. When I say visibility

It means that you can continuously check and monitor the performance of all your systems

So if at any time the performance of any of the systems is degrading, the configuration management system will notify you. And hence

You can prevent errors before they actually occur. And by control, I mean that you have the power to change anything

So if any of your servers failed, you can reconfigure it again to repair it so that it is up and running again, or you can even replace the server if needed. And also, the configuration management system holds the entire historical data of your infrastructure. It documents all the snapshots of every version of your infrastructure

So overall, the configuration management process facilitates the orderly management of your system information and system changes so that it can be used for beneficial purposes

So let us proceed to the next topic and see the problems that existed before configuration management and how configuration management solved them, and with that you'll understand more about configuration management as well

So, let's see now, why do we need configuration management? Now, the necessity behind configuration management was dependent upon a certain number of factors and reasons

So let us take a look at them one by one

So the first problem was managing multiple servers. Now, earlier every system was managed by hand, and by that I mean that you had to log in to them via SSH, make changes and then log off again

Now imagine if a system administrator would have to make changes on multiple servers

You'd have to do this task of logging in, making changes and logging off again and again repeatedly. So this would take up a lot of time, and there was no time left for the system administrators to monitor the performance of the systems continuously. So if at any time any of the servers failed, it took a lot of time to even detect the faulty server, and even more time to repair it, because the configuration scripts that they wrote were very complex and it was very hard to make changes to them

So after configuration management systems came into the picture, what they did is that they divided all the systems in my infrastructure according to their dedicated tasks, their design or architecture, and organized my systems in an efficient way

Like it grouped my web servers together, my database servers together, my application servers together, and this process is known as baselining


Let's take an example: say that I wanted to install a LAMP stack on my system, and a LAMP stack is a software bundle where L stands for Linux, A for Apache, M for MySQL and P for PHP

So I need these different pieces of software for different purposes

Like I need the Apache server to host my web pages, and I need PHP for my web development

I need Linux as my operating system and MySQL as my data definition language or data manipulation language. Since now all the systems in my infrastructure are baselined

I would know exactly where to install each piece of software

For example, I'll use Apache as my web server here; for the database

I will install MySQL here, and it also becomes easy for me to monitor my entire system

For example, if my web pages are not running I would know that there's something wrong

With my web servers, so I'll go check in here

I don't have to check the database servers and application servers for that


If I'm not able to insert data or extract data from my database

I would know that something is wrong with my database servers

I don't need to check these two for that matter

So what the configuration management system did with baselining is that it organized my systems in an efficient way so that I can manage and monitor all my servers efficiently

Now, let us see the second problem that we had, which was scaling up and scaling down

See, nowadays requirements can come up at any time and you might have to scale up or scale down your systems on the fly, and this is something that you cannot always plan ahead. And scaling up

your infrastructure doesn't always mean that you just buy new hardware and place it anywhere


You cannot do that

You also need to provision and configure these new machines properly

So with a configuration management system, I've already got my infrastructure baselined, so I know exactly how these new machines are going to work according to their dedicated tasks and where I should actually place them. And the scripts that configuration management uses are reusable, so you can use the same scripts that you used to configure your older machines to configure your new machines as well

So let me explain it to you with an example

Let's say that you're working on an e-commerce website and you decide to hold a mega sale

A New Year or Christmas sale or anything. So it's obvious that there is going to be a huge rise in the traffic

So you might need more web servers to handle that amount of requests, and you might even need a load balancer, or maybe two, to distribute that amount of traffic onto your web servers, and these changes need to be made in a very short span of time

So after you've got the necessary hardware, you also need to provision it accordingly, and with configuration management you can easily provision these new machines using recipes or playbooks or whatever kind of scripts your configuration management tool uses

And also, after the sale is over, you don't need that many web servers or a load balancer, so you can disable them using the same easy scripts as well. Also, scaling down is very important when you are using cloud services; when you do not need any of those machines, there's no point in keeping them

So you have to scale down as well, because you have to reconfigure your entire infrastructure as well, and with configuration management

It is a very easy thing to auto scale up and scale down your infrastructure

So I think you have all understood this problem and how configuration management solved it, so let us take a look at the third problem

The third problem was that the work velocity of the developers was affected, because the system administrators were taking time to configure the servers. After the developers have written the code

the next job is to deploy it on different servers, like test servers and production servers, for testing it out and releasing it. But then again, every server was managed by hand before, so the system administrators would again have to do the same thing: log in to each server, configure them properly by making changes, and repeat that on all the servers

So this was taking a lot of time. Now, before DevOps came into the picture, there was already agility on the developers' end, for which they were able to release new software very frequently, but it was taking a lot of time for the system administrators to configure the servers for testing, so the developers would have to wait for all the test results, and this highly hampered the work velocity of the developers. But after

there was configuration management, the system administrators had access to a configuration management tool which allowed them to configure all the servers in one go

All they had to do was write down all the configurations, write down the list of all the software that they needed to provision these servers, and deploy it on all of the servers in one go

So now agility came in on the system administrators' end as well

So now, after configuration management, the developers and the system administrators were finally able to work at the same pace

Now, this is how configuration management solved the third problem. Now, let us take a look at the last problem

Now, the last problem was rolling back. In today's scenario

everyone wants change, and you need to keep making changes frequently, because customers will start losing interest if things stay the same. So you need to keep releasing new features to upgrade your application. Even giants like Amazon and Facebook

do it now and then, and still they're unsure whether the users are going to like it or not

Now imagine if the users did not like it: they would have to roll back to the previous version again. So let's see how this creates a problem

Now before there was configuration management

Let's say you've got the old version, which is version one. When you're upgrading it, you're changing all the configurations on the production server

You're deleting the old configurations completely and deploying the new version. Now, if the users did not like it, you would have to reconfigure this server again with the old configurations, and that will take up a lot of time

So the application is going to be down for the amount of time that you need for reconfiguring the server, and this might create a problem

But when you're using a configuration management system, as you know, it documents every version of your infrastructure. When you're upgrading with configuration management, it will remove the configurations of the older version, but they will be well documented

They will be kept there, and then the newer version is deployed

Now, if the users did not like it this time, the older configuration version was already documented

So all you have to do is just switch back to the old version, and this won't take up any time, and you can upgrade or roll back your application with zero downtime. Zero downtime means that your application would be down for zero time

It means that the users will not notice that your application went down, and you can achieve it seamlessly. And this is how the configuration management system solved all the problems that existed before


I hope that you've all understood how configuration management did that. Let us now move on to the next topic. Now, the question is: how do I incorporate configuration management into my system? Well, you do that using configuration management tools

So let's take a look at all the available configuration management tools

So here I've got the four most popular tools that are available in the market right now

I've got Ansible and SaltStack, which are push-based configuration management tools. By push-based

I mean that you can push all those configurations directly onto your node machines, while Chef and Puppet are both pull-based configuration management tools

It means that they rely on a central server for configurations; they pull all the configurations from a central server

There are other configuration management tools available in the market too, but these four are the most popular ones

So now let's know more about Ansible. Now, Ansible is a configuration management tool that can be used for provisioning, orchestration, application deployment and automation, and it's a push-based configuration management tool

Like I told you, what it does is that it automates your entire IT infrastructure and gives you large productivity gains, and it can automate pretty much anything

It can automate your cloud, your networks, your servers and all your IT processes

So let us move on to the next topic

So now let us see the features of ansible

The first feature is that it's very simple

It's simple to install and set up, and it's very easy to learn, because Ansible playbooks are written in a very simple data serialization language which is known as YAML, and it's pretty much like English

So anyone can understand it, and it's very easy to learn. The next feature, because of which Ansible is preferred over other configuration management tools, is that it's agentless. It means that you do not need any kind of agents or any kind of client software to manage your node machines

All you have to do is install Ansible on your control machine, just make an SSH connection with your nodes, and start pushing configurations right away

The next feature is that it's very powerful. Even though you call Ansible simple and it does not require any agent

it has the capabilities to model very complex IT workflows, and it comes with a very interesting feature which is called batteries included

It means that you've already got everything that you need, and in Ansible it's because it comes with more than 750 inbuilt modules, which you can use for any purpose in your project

And it's very efficient, because all the modules that Ansible comes with are extensible

It means that you can customize them according to your needs, and for doing that you do not need to use the same programming language that they were originally written in; you can choose any programming language that you're comfortable with and then customize those modules for your own use

So this is the power and liberty that Ansible gives you. Now, let us take a look at the case study of NASA

What were the problems that NASA was facing, and how did Ansible solve all those problems? Now, NASA is an organization that has been sending men to the Moon

They are carrying out missions on Mars, and they're launching satellites now and then to monitor the Earth, and not just the Earth

They're even monitoring other galaxies and other planets as well

So you can imagine the kind and the amount of data that NASA might be dealing with. But all their applications were in a traditional hardware-based data center, and they wanted to move into a cloud-based environment, because they wanted better agility and better adaptive planning

And also they wanted to save costs, because a lot of money was spent on just the maintenance of the hardware. And they wanted more security, because NASA is a government organization of the United States of America, and they hold a lot of confidential data for the government as well

So they just cannot always rely on the hardware to store all these confidential files; they needed more security, because if at any time the hardware fails, they cannot afford to lose that data, and that is why they wanted to move all their 65 applications from a hardware environment to a cloud-based environment

Now, let us take a look

at what the problem was now. For this migration of all the data into a cloud environment

they contacted a company called InfoZen. Now, InfoZen is a cloud broker and integrator that implements solutions to meet needs with security

So InfoZen was responsible for making this transition, and NASA wanted to make this transition in a very short span of time

So all the applications were migrated as they were into the cloud environment, and because of this, all the AWS accounts and all the virtual private clouds that were previously defined got accumulated in a single data space. This built up a huge chunk of data, and NASA had no way of centrally managing it, and even simple tasks, like giving a particular system administrator access rights to a particular account

became a very tedious job. NASA wanted to automate end-to-end deployment of all their apps, and for that they needed a management system

So this was the situation when NASA moved into the cloud. So you can see that all those AWS accounts and virtual private clouds

got accumulated and made a huge chunk of data, and everyone was accessing it directly

So there was a problem in managing the credentials for all the users and the different teams. What NASA needed was to divide up all their inventories, all the resources, into groups and a number of hosts

And also they wanted to divide up all the users into different teams and give each team different credentials and permissions

And also, if you look at a more granular level, each user in each team could also have different credentials and permissions

Let's say that you want to give the team leader of a particular team access to some kind of data, but you don't want the other users in the team to access that data

So NASA also wanted to define different credentials for each individual member as well. They wanted to divide up all the data according to projects and jobs also. So they wanted to move from chaos into a more organized manner, and for that they adopted Ansible Tower. Now, Ansible Tower is Ansible at a more enterprise level. Ansible Tower provides you with a dashboard which shows the status summary of all the hosts and jobs, and Ansible Tower is a web-based interface for managing your organization

It provides you with a very easy-to-use user interface for managing quick deployments and monitoring all the configurations

So let's see what Ansible Tower did. It has a credential management system which could give different access permissions to each individual user and team, and it divided up the users into teams and single individual users as well. And it has a job assignment system, so you can also assign jobs using Ansible Tower. Suppose

let's say that you have assigned job one to a single user and job two to another single user, while job three could be assigned to a particular team

The whole inventory was also managed: all the servers

dedicated to a particular mission, let's say, were grouped together, along with all the host machines and other systems as well. So Ansible Tower helped NASA to organize everything. Now, let us take a look at the dashboard that Ansible Tower provides us

So this is a screenshot of the dashboard at a very initial level

You can see right now there are zero hosts

Nothing is there, but I'm just showing you what Ansible Tower provides you. So on the top you can check all the users and teams

You can manage the credentials from here

You can check your different projectsand inventories

You can make job templates and schedule jobs as well

So this is where you can schedule jobs and provide every job with a particular ID so that you can track it

You can check your job status here, whether your job was successful or failed, and since Ansible Tower is a configuration management system

it will hold the historical data as well

So you can check the job statuses of the past month or the month before that

You can check the host status as well

You can check how many hosts are up and running; you can see the host count here

So this dashboard of Ansible Tower provides you with so much ease of monitoring all your systems

So it's very easy to use the Ansible Tower dashboard; anyone in your company can use it, because it's very user-friendly. Now, let us see the results that NASA achieved after it used Ansible Tower. Now, updating nasa.gov used to take one hour of time, and after using Ansible it got down to just five minutes

Security patching updates were a multi-day process and now require only 45 minutes; the provisioning of OS accounts can be done in just 10 minutes; earlier, the application stack-up time required one to two hours, and now it's done in only 10 minutes

It also achieved near real-time RAM and disk monitoring, and baselining all the standard Amazon Machine Images used to be a one-hour manual process

And now you don't even need manual intervention for that

It became a background invisible process

So you can see how Ansible has drastically changed the overall management system of NASA

So guys, I hope that you've all understood how Ansible helped NASA

If you have any questions, you may ask me at any time in the chat window

So let us proceed to the next topic

Now, this was all about how others have used Ansible

So now let us take a look at the Ansible architecture, so that we can understand more about Ansible and decide how we can use it

So this is the overall Ansible architecture

I've got the Ansible automation engine, and I've got the inventory and a playbook inside the automation engine

I've got the configuration management database here, and the hosts, and this configuration management database is a repository that acts as a data warehouse for all your IT installations

It holds all the data relating to the collection of all your IT assets, and these are commonly known as configuration items, and it also holds the data which describes the relationships between such assets

So this is a repository for all your configuration management data, and here I've got the Ansible automation engine

I've got the inventory here, and the inventory is nothing but the list of all the IP addresses of all my host machines. Now, as I told you, you achieve configuration management with a configuration management tool like Ansible. But how do you use Ansible? Well, you do that using playbooks
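As a small sketch of what such an inventory can look like, here is a minimal file in Ansible's YAML inventory format (the group names and IP addresses are illustrative, not from the session):

```yaml
# Hypothetical inventory sketch; group names and IPs are placeholders.
all:
  children:
    webservers:           # hosts that will run Apache/PHP
      hosts:
        192.168.1.10:
        192.168.1.11:
    dbservers:            # hosts that will run MySQL
      hosts:
        192.168.1.20:
```

Grouping hosts like this is what lets a playbook target, say, only the `webservers` group.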

And playbooks describe the entire workflow of your system

Inside playbooks

I've got modules, APIs and plugins. Now, modules are the core files. Playbooks contain a set of plays, which are a set of tasks, and inside every task

there is a particular module

So when you run a playbook, it's the modules that actually get executed on all your node machines

So modules are the core files, and like I told you before, Ansible already comes with inbuilt modules which you can use, and you can also customize them. It comes with different cloud modules, database modules

And don't worry

I'll be showing you how to use those modules in Ansible, and there are different APIs as well

Well, the APIs in Ansible are not meant for direct consumption

They're just there to support the command-line tools

For example, they have the Python API, and these APIs can also be used as a transport for cloud services, whether public or private. Then I've got plugins. Now, plugins are a special kind of module that allow you to execute Ansible tasks as a job build step, and plugins are pieces of code that augment Ansible's core functionality. Ansible also comes with a number of handy plugins that you can use

For example, you have action plugins, cache plugins, callback plugins, and you can also create plugins of your own as well

Let me tell you how exactly a plugin is different from a module

Let me give you the example of action plugins. Now, action plugins are front-end modules, and what they do is that when you start running a playbook, something needs to be done on the control machine as well

So these action plugins trigger those actions and execute those tasks on the controller machine before calling the actual modules that get executed in the playbook

And also you have a special kind of plugin, called the connection plugin, which allows you to connect to the Docker containers on your node machine, and many more. And finally, I have this host machine that is connected via SSH, and these host machines could be either Windows or Linux or any kind of machine

And also, let me tell you that it's not always necessary to use SSH for the connection

You can use any kind of network authentication protocol; you can use Kerberos, and you can also use the connection plugins as well

So this is a fairly simple Ansible architecture

So now that you've understood the architecture, let us write a playbook. Now, let me tell you how to write a playbook. Playbooks in Ansible are simple files written in YAML, and YAML is a data serialization language

You can think of a data serialization language as a translator for breaking down all your data structures and serializing them in a particular order, which can be reconstructed again for later use, and you can use this reconstructed data structure in the same environment or even in a different environment

So this is the control machine where Ansible will be installed, and this is where you'll be writing your playbooks

Let me show you the structure of how to write a playbook

Every playbook starts with three dashes at the top

So first you have to mention the list of all your host machines here

It means: where do you want this playbook to run? Then you can mention variables by gathering facts, and then you can mention the different tasks that you want

Now, remember that the tasks get executed in the same order that you write them

For example, if you want to install software A first and then software B later on

make sure that the first task is to install software A and the next task is to install software B. And then I've got handlers at the bottom

The handlers are also tasks, but the difference is that in order to execute handlers

you need some sort of trigger in the list of tasks

For example, we use notify

I'll show you an example now

Okay, let me show you an example of a playbook so that you can relate to this structure

So this is an example of an Ansible playbook to install Apache. Like I told you, it starts with three dashes at the top. Remember that every list item starts with a dash, or a '-', in front. Here

I've only mentioned the name of just one group

You can mention the names of several groups where you want to run your playbook

Then I've got the tasks. You give a name for the task, which is 'install Apache', and then you use a module here

I'm using the apt module to download the package

So this is the syntax of writing the apt module

So you give the name of the package, which is apache2, and update_cache equal to yes

It means that it will make sure that apt-get is already updated on your node machine before it installs Apache 2, and you mention state equal to latest

It means that it will download the latest version of Apache 2

And this is the trigger, because I'm using handlers here, right? And the handler here is to restart Apache, and I'm using the service module here, and the name of the software that I want to restart is Apache

And state is equal to restarted

So in notify I have mentioned that there is going to be a handler whose job is to restart Apache 2, and then the task in the handler gets executed and it restarts Apache 2

So this is a simple playbook, and we'll also be writing similar kinds of playbooks later in the hands-on part
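Reconstructed from the description above, the playbook on the slide would look roughly like this (the host group name `webservers` is an assumed placeholder):

```yaml
---
# Sketch of the Apache playbook described above; the host group
# "webservers" is an assumption, not the exact name from the slide.
- hosts: webservers
  tasks:
    - name: Install Apache
      apt:
        name: apache2        # the package to install
        update_cache: yes    # run apt-get update before installing
        state: latest        # install the latest version
      notify:
        - Restart Apache     # triggers the handler below

  handlers:
    - name: Restart Apache
      service:
        name: apache2
        state: restarted
```

Note how the handler only runs when the task that carries the `notify` actually changes something on the node.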

So you'll be learning it again

So if it's looking a little like gibberish to you, we'll be doing it in the hands-on part, and that will clear all your doubts

So now let us see how to use Ansible and understand its applications. We can use Ansible for application deployment, configuration management, security and compliance, provisioning and orchestration

So let us take a look at them one by one first

Let us see how we can use ansible for orchestration

Well, orchestration means, let's say that we have defined configurations for each of my systems, but I also need to make sure how these configurations will interact with each other

So this is the process of orchestration, where I decide how the different configurations on the different systems in my infrastructure would interact with each other in order to maintain a seamless flow of my application. And your application deployments need to be orchestrated, because you've got front-end and back-end services

You've got databases, you've got monitoring, networks and storage, and each of them has its own role to play with its configuration and deployment, and you cannot just run all of them at once and expect that the right thing happens

So what you need is an orchestration tool that makes sure all these tasks happen in the proper order: that the database is up before the back-end server, that the front-end server is removed from the load balancer before it gets upgraded, and that your networks have their proper VLANs configured

So this is what Ansible helps you to do

So let me give you a simple example so that you can understand it better

Let's say that I want to host a website on my node machines

And this is precisely what we're going to do later in the hands-on part

So in order to do that, first I have to install the necessary software, which is the LAMP stack, and after that I have to deploy all the HTML and PHP files on the web server

And after that, I'll be gathering some kind of information from my web pages, which will go into my database server

Now, if you want to perform all these tasks, you have to make sure that the necessary software is installed first. I cannot deploy the HTML and PHP files on the web servers

if I don't have a web server, if Apache is not installed

So this is orchestration, where you mention the tasks that need to be carried out before and the tasks that need to be carried out later

So this is what Ansible playbooks allow you to do
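As a sketch of that ordering, a playbook might install Apache before copying the site files; tasks run top to bottom, which is exactly the guarantee described above (the file paths here are illustrative placeholders):

```yaml
---
# Illustrative ordering sketch: Apache is installed before the site
# files are deployed, because tasks execute in the order written.
# The source and destination paths are placeholders.
- hosts: webservers
  tasks:
    - name: Install Apache first
      apt:
        name: apache2
        state: present

    - name: Then deploy the HTML/PHP files
      copy:
        src: site/index.php
        dest: /var/www/html/index.php
```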


Let's see what provisioning is. Provisioning in English means to provide something that is needed

It is the same in the case of Ansible

It means that Ansible will make sure that all the necessary software that you need for your application to run is properly installed in each of the environments of your infrastructure

Let us take a look at this example here to understand what provisioning actually is

Now, say I want to provision a Python web application that I'm hosting on Microsoft Azure. Microsoft Azure is very similar to AWS; it is also a cloud platform on which you can build all your applications

So now, if I'm developing a Python web application, for coding I would need the Microsoft Azure document database

I would need Visual Studio, I'd need to install Python as well, and some kind of software development kit and different APIs for that. So in Ansible you can list out the names of all the software development kits and all the necessary software that you would require in order to develop your web application

And for testing your code out you will again need the Microsoft Azure document database, you would again need Visual Studio and some kind of testing software

So again, you can list out all that software in an Ansible playbook and it will provision your testing environment as well

And it's the same thing when you're deploying it on the production server as well. Ansible will provision your entire application at all stages: at the coding stage, at testing and at the production stage also. So guys, I hope you've understood what provisioning is. Let us move on to the next topic and see how we can achieve configuration management with Ansible. Now, Ansible configurations are simple data descriptions of your infrastructure, which are both human-readable and machine-parsable, and Ansible requires

nothing more than an SSH key in order to start managing systems, and you can start managing them without installing

any kind of agent or client software. So you can avoid the problem of managing the management, which is very common in different automation systems

For example, I've got my host machines with Apache web servers installed on each of the host machines

I've also got PHP and MySQL installed. If I want to make configuration changes, if I want to update Apache and update MySQL, I can do it directly

I can push those new configuration details directly onto my host machines, or my node machines, and my server, and you can do it very easily using Ansible playbooks
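A push of that kind could be sketched as a short playbook (package names are the common Debian/Ubuntu ones, assumed for illustration):

```yaml
---
# Sketch: pushing updated packages/configurations to the node machines.
# Package names (apache2, mysql-server) are assumed Debian/Ubuntu names.
- hosts: all
  tasks:
    - name: Update Apache to the latest version
      apt:
        name: apache2
        state: latest

    - name: Update MySQL server to the latest version
      apt:
        name: mysql-server
        state: latest
```

Running this against the inventory pushes the change to every node in one go, which is the agility being described here.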

So let us move on to the next topic and see how application deployment has been made easier with Ansible. Now, Ansible is the simplest way to deploy your applications

It gives you the power to deploy all your multi-tier applications very reliably and consistently, and you can do it all from one common framework

You can configure all the needed services as well as push application artifacts from one common system

With Ansible you can write playbooks, which are descriptions of the desired state of your systems, and they are usually kept in source control. Ansible

then does all the hard work for you to get your systems to that state

no matter what state they are currently in, and playbooks make all your installations and upgrades for day-to-day management very repeatable and reliable

So let'ssay that I am using a version control system like get while I'm developing my app

Andalso I'm using Jenkins for continuous integration now Jenkins will extract code from get everytime there is a new Commit and then making software built and later

This build will then get deployed to the test server for testing.

Now, if changes keep being made to the codebase continuously, you would have to configure your test and production servers continuously as well, according to the changes.

So what Ansible does is continuously keep checking the version control system here, so that it can configure the test and the production server accordingly and quickly, and hence it makes your application deployment a piece of cake.

So guys, I think you have understood the application deployment

Don't worry, in the hands-on part we'll also be deploying our own applications on different servers as well.

Now, let us see how we can achieve security with Ansible. In today's complex IT environment, security is paramount.

You need security for your systems, you need security for your data, and not just your data, your customers' data as well.

Not only must you be able to define what it means for your systems to be secure,

you also need to be able to simply apply that security, and you need to constantly monitor your systems to ensure that they remain compliant with that security, and you can do all of this with Ansible.

You can simply define security for your systems using playbooks. With playbooks,

you can set up firewall rules.

You can lock down different users or groups, and you can even apply custom security policies as well. Ansible also works with the MindPoint Group, which writes Ansible roles to apply the DISA STIG. Now, the DISA STIG is a cybersecurity methodology for standardizing security protocols within your networks, servers and computers.

And it is also compliant with the existing SSH and WinRM protocols.

And this is also a reason why Ansible is preferred over other configuration management tools. It is also compatible with security verification tools like OpenSCAP and STIG viewers: what such tools do is carry out a timely inspection of all your software inventory and check for any kind of vulnerabilities, allowing you to take steps to prevent attacks before they actually happen, and you can then apply that security over your entire infrastructure using Ansible.
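As a small illustration of defining security in a playbook, here is a sketch using Ansible's ufw firewall module (the module is real, but this particular policy is our own example, not from this demo):

```yaml
---
- name: apply a basic firewall policy
  hosts: test-servers
  become: true
  tasks:
    - name: allow inbound SSH
      ufw:
        rule: allow
        port: '22'
        proto: tcp
    - name: allow inbound HTTP
      ufw:
        rule: allow
        port: '80'
        proto: tcp
    - name: deny all other traffic and enable the firewall
      ufw:
        state: enabled
        policy: deny
```

Running this against a group of hosts applies the same firewall rules everywhere, which is exactly the "define security once, apply it everywhere" idea described above.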

So, how about some hands-on with Ansible? Let us write some Ansible playbooks now.

So what we are going to do is install the LAMP stack, then host a website on the Apache server, and we'll also collect some data from our webpage and store it in the MySQL server.

So guys, let's get started

So here I'm using the Oracle VirtualBox Manager, and here I've created two virtual machines.

The first is the Ansible control machine, and the second is the Ansible host machine.

So the Ansible control machine is the machine where I have installed Ansible, and this is where I'll be writing all my playbooks, and ansible-host-1 here is going to be my node machine.

This is where the playbooks are going to get deployed.

So in this machine, I'll deploy my website

So I'll be hosting a website on ansible-host-1.

Let's just go to my control machine and start writing the playbooks.

So this is my ansible control machine


Let's go to the terminal first.

So this is the terminal of my ansible control machine

And now, I've already installed Ansible here, and I've already made an SSH connection with my node machine.

So let me just become the root user first. Now, you should know that you do not always need to become the root user in order to use Ansible.

I'm just becoming the root user for my convenience, because I like to have all the root privileges while I'm using Ansible, but you can sudo to any user you like. So let me clear my screen first.

Now, before we start writing playbooks, let us first check the version of Ansible that is installed here.

And for that I'll just use the command ansible --version.

And as you can see here, I have Ansible version 2.2.0.0 here.


Let me show you my host inventory file, since I've got only one node machine here.

So I'm going to show you where exactly the IP address of my node machine is being stored.

So I'll open the hosts file for you now; I'm just going to open the file and show it to you.

So I'm using the gedit editor, and the default location of your host inventory file is /etc/ansible/hosts.

And this is your host inventory file. Now, I have mentioned the IP address of my host machine here, which is 192.168.56.102, and I have named it under the group name test-servers.

So always write the name of your group within square brackets. Now, I just have one node machine, so there is only one IP address.

If you have many node machines, you can just list the IP addresses under this line.

It's as simple as that. Or if you want to group hosts under a different name, you can use another pair of square brackets and put a different name for another set of your hosts.
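Put together, a minimal inventory file along these lines (the second group and its IPs are purely hypothetical, for illustration) would look like:

```ini
# /etc/ansible/hosts -- the default inventory location
[test-servers]
192.168.56.102

# a hypothetical second group holding another set of hosts
[db-servers]
192.168.56.103
192.168.56.104
```

Any group name in this file can then be used as the target in ad-hoc commands or in a playbook's hosts field.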


Now, let me clear my screen first

So first, let me just test whether the SSH connection is working properly or not using Ansible. For that I'll just type the command ansible, then -m ping, and then the name of the group of my host machines, which is test-servers in my case.

And the ping changed to pong. It means that an SSH connection is already established between my control machine and my node machine.

So we are all ready to write playbooks and start deploying them on the nodes.

So the first thing that I need to do is write a provisioning playbook. Now, since I'm going to host a website, I would first need to install the necessary software, so I'll be writing a provisioning playbook for that, and I'll provision my node machine with the LAMP stack.

So let us write a playbook to install the LAMP stack on my node machine. Now, I've already written that playbook.

So I'm just going to show it to you

I'm using the gedit editor again, and the name of my provisioning playbook is lamp_stack. The extension for a YAML file is .yml, and this is my playbook.


Let me tell you how I have written this playbook. As I told you, every playbook starts with three dashes at the top. So here are the three dashes, and then I've given a name to this playbook, which is to install Apache, PHP and MySQL.

Now, I've already got the L in my LAMP, because I'm using an Ubuntu machine, which is a Linux operating system.

So I need to install Apache, PHP and MySQL now, and then you have to mention the hosts here, on which you want this playbook to get deployed.

So I've mentioned this over here, and then I want to escalate my privileges, for which I'm using become and become_user. It is because sometimes you want to become a user different from the one you are actually logged into the remote machine as.

So you can use privilege escalation tools like su or sudo to gain root privileges.

And that is why I've used become and become_user for that. So I'm becoming the user root, and I'm using become: true here at the top.

What it does is activate your privilege escalation, so you become the root user on the remote machine. And then there is gather_facts: true.

Now, what it will do is gather useful variables about the remote host. What exactly it will gather is things like files or keys, which can be used later in a different playbook.

And as you know, every playbook is a list of tasks that you need to perform.

So this is the list of all the tasks that I'm going to perform, and since it's a provisioning playbook, I'm only installing the necessary software that will be needed in order to host a website on my node machine.

So first I'm installing Apache. I've given the task the name install apache2, and then I'm using the package module here.

And this is the syntax of the package module

So you have to first specify the name of the package that you are going to install, which is apache2, and then you put state equal to present. Since we're installing something for the first time and we want this package to be present on the node machine, we put state equal to present. Similarly, if you want to delete something, you can put state equal to absent, and it works that way. So I've installed the Apache PHP module, and I've installed the PHP client and related PHP packages, the PHP GD library, and the package php-mysql.

And finally, I've installed the MySQL server in a similar way to how I installed apache2. So this is a very simple playbook to provision your node machine, and actually all the playbooks are simple.
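Reconstructed from the walkthrough above, the provisioning playbook would be shaped roughly like this (the exact PHP package names vary across Ubuntu releases and are partly assumed here):

```yaml
---
- name: To install Apache, PHP and MySQL
  hosts: test-servers
  become: true
  become_user: root
  gather_facts: true
  tasks:
    - name: install apache2
      package:
        name: apache2
        state: present
    - name: install php packages (names assumed)
      package:
        name: "{{ item }}"
        state: present
      with_items:
        - php
        - php-mysql
        - php-gd
    - name: install mysql server
      package:
        name: mysql-server
        state: present
```

Each task uses the generic package module with state: present, exactly as described, so the same playbook idea works regardless of the underlying package manager.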

So I hope that you have understood how to write a playbook. Now, let me tell you something that you should always keep in mind while you are writing playbooks: make sure that you are always extra careful with the indentation, because YAML is a data serialization language, and it differentiates between elements with different indentations.

For example, I've got a name here and a name here also, but you can see that the indentations are different. It is because this is the name of my entire playbook, while this is just the name of my particular task.

So these two are different things, and they need to have different indentations. The ones with similar indentations are known as siblings, like this one.

This is also doing the same thing: this is installing some kind of package, and this is also installing some kind of package. So these are similar, and that's why you should be very careful with indentation.

Otherwise, it will create a problem for you

So what are we waiting for? Let us run this playbook. I'll clear my screen first. So, in order to run a playbook, the command that you should be using is ansible-playbook and then the name of your file, which is lamp_stack.yml, and here we go.

And here it is

It says ok because it is able to connect to my node machine.

Apache 2 has been installed

And it's done

My playbook ran successfully.

And how do I know that? I know that by seeing these common return values. These common return values, like ok, changed, unreachable and failed, give me a status summary of how my playbook ran.

So ok equals 8 means there were eight tasks that ran okay. Changed equals 7 means that something on my node machine has been changed, because obviously I've installed new packages onto my node machine, so it's showing changed equals 7. Unreachable equals 0 means that zero hosts were unreachable, and failed equals 0 means that zero tasks failed. So my playbook ran successfully on my node machine.
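The status summary described here is Ansible's PLAY RECAP. For the run above it would look roughly like this (the format is Ansible's standard output, the values are the ones narrated):

```
PLAY RECAP *********************************************************
192.168.56.102             : ok=8    changed=7    unreachable=0    failed=0
```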

So let us check my node machine and see if Apache and MySQL have been installed.

So let us go to my node machine now

So this is my node machine. Let us check if the Apache server has been installed.

So I'm going to my web browser. This is the web browser on my node machine. Let me go to localhost and check if the Apache web server has been installed, and it's there.

It works


This is the default web page of apache2 web server

So now I know for sure that Apachewas installed in my note machine now

Let us see if MySQL server has been installed

Let me go to my terminal

This is the terminal of my load machine


If you want to check if MySQL is installed, just use the following command: mysql -u root -p, then enter the password for MySQL, and there it is. So the MySQL server was also successfully installed on my node machine.

So let's go back to my control machine and let's do what is left to do.

So we're back into our control machine


I've already provisioned my node machine.

So let's see what we need to do next. Now, since we are deploying a website on the node machine, let me first show you how my first web page looks. So this is going to be my first web page, which is index.html, and I've got two more PHP files as well, so I'll actually be deploying these files onto my node machine.

So let me just show you the first web page.

So this is going to be my first web page.

And what I'm going to do is ask for a name and email, because this is a registration page for Edureka, where you have to register with your name and email, and I want this name and email to go into my database.

So for that I need to create a database, and I also need to create a table for this name and email data to be stored in. For that we'll write another playbook, and we'll be using database modules in it. Let me clear the screen first. Now again, I've already written that playbook.

So let me just show it to you

So I'm using the gedit editor here again, and the name of this playbook is mysql_module.


So this is my playbook

So like all playbooks, it starts with three dashes, and here I have mentioned hosts: all. Now, I just have only one host. I know I could have mentioned either the one IP address directly, or even given the name of my group, but I've written just all here so that you know that if you have many group names, or you have many nodes, and you want this playbook to run on all of your node machines, you can use all, and this playbook will get deployed on all your node machines.

So this is another way of mentioning your hosts. And I'm using remote_user: root, which is another method to escalate your privileges; it's similar to become and become_user. So the remote user will have root privileges while this playbook runs. And then comes the list of the tasks. What I'm doing in this playbook is that, since I have to connect to my MySQL server, which is present on my node machine,

I need a particular piece of software for that, which is the MySQL Python module, and I'm going to download and install it using pip. Now, pip is the Python package manager, with which you can install and download Python packages.

But first, I need to install pip on my node machine.

As I told you, the tasks that you write in a playbook get executed in the same order that you write them. So my first task is to install pip, and I'm using the apt module here.

Here I've given the name of the package, which is python-pip, with state equal to present. And after that, I'm installing some other related software using apt as well; I'm also installing the library libmysqlclient-dev.

And after that, using pip, I'm installing the MySQL Python module. Now notice that you can consider this an orchestration playbook, because here I'm making sure that pip gets installed first, and after pip is installed, I'm using pip to install another Python package.

So you see what we did here, right? Then I'm going to use the database modules for creating a new user to access the database, and then I'm creating the database named edu. So for creating a MySQL user, I've used the mysql_user database module that Ansible comes with, and this is the syntax of the mysql_user module: we give the name of the new user, which is edureka, we mention the password, and then the priv here. It means: what privileges do you want to give to the new user? And here I'm granting all privileges on all databases. And since we're creating it for the first time, we want state to be present.

Similarly, I'm using the mysql_db module to create a database in my MySQL server named edu. So this is the very simple syntax of the mysql_db module: you just give the name of the database in db equal to, and state equal to present.

So this will create a database named edu. And after that, I also need to create a table inside the database for storing my name and email details, right? And unfortunately, Ansible does not have any MySQL table-creating module. So what I did is use the command module here. With the command module, I'm directly going to use MySQL queries to create a table, and the syntax is something like this, so you can write it down or remember it if you want to use it.

So since I'm writing a MySQL query, I started with mysql, then -u for the user, which is edureka, then -p for the password, and so on. Now, after -e, just write the query that you need to execute on the MySQL server, and write it in single quotation marks. So I have written the query to create a table, and this is create table, with the name and email columns, and after that you just mention the name of the database in which you want to create this table, which is edu for me.

So this is my orchestration playbook.
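Pieced together from the walkthrough, the orchestration playbook would be shaped roughly like this (the user name, password, table and column definitions are placeholders following the demo's narration):

```yaml
---
- name: Create MySQL user, database and table
  hosts: all
  remote_user: root
  tasks:
    - name: install pip first
      apt:
        name: python-pip
        state: present
    - name: install the MySQL client development library
      apt:
        name: libmysqlclient-dev
        state: present
    - name: install the MySQL python module using pip
      pip:
        name: MySQL-python
    - name: create a MySQL user with all privileges
      mysql_user:
        name: edureka
        password: secret
        priv: '*.*:ALL'
        state: present
    - name: create the database
      mysql_db:
        name: edu
        state: present
    - name: create the table (no dedicated Ansible module exists)
      command: >
        mysql -u edureka -psecret edu
        -e 'CREATE TABLE IF NOT EXISTS reg (name VARCHAR(50), email VARCHAR(50));'
```

Note how task order does the orchestration: pip is installed before pip is used, and the user and database exist before the table is created.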

I'll clear my screen first. The command is ansible-playbook and the name of your playbook, which is mysql_module.yml, and here we go.

Again, my common return values tell me that the playbook ran successfully, because there are no failed tasks and no unreachable hosts, and there are changed tasks on my node machine. So now all the packages are downloaded, and my node machine is well provisioned and properly orchestrated.


What are we waiting for? Let's deploy our application. We'll clear the screen first.

So now let me tell you what exactly we need to do in order to deploy my application. In my case, there are just three files, PHP and HTML, that I need to deploy onto my node machine in order to display these HTML and PHP files on the web server on my node machine.

What I need to do is copy these files from my control machine to the proper location on my node machine, and we can do that using playbooks.

So let me just show you the playbook to copy files. The name of my file is deploy_website.

So this is my playbook to deploy my application. Here again, I've used the three dashes, and the name of my playbook is copy. The hosts, as you know, are going to be test-servers.

I'm using privilege escalation again, with become and become_user, and gather_facts is again true.

And here is the list of tasks. The task is to just copy my file from my control machine and paste it in the destination machine, which is my node machine. For copying, I've used the copy module, which is a file module that Ansible comes with, and this is the syntax of the copy module.

You just need to mention a source, and the source is the path where my file is kept on my control machine, which is /home/edureka/Documents, and the name of the file is index.html. And I want it to go to /var/www/html/index.html, so I should be copying my files into this location in order for them to be displayed on the web page. Similarly, I have copied my other PHP files using the same copy module.

I've mentioned the source and destination, copying them to the same destination from the same source.

So I don't think any of you would have questions here. This is the easiest playbook that we have written today.

So let us deploy our application now. For that we need to run this playbook, and before that we need to clear the screen, because there is a lot of stuff on our screen right now.

So let's run the Playbook

And here we go. And it was very quick, because there was nothing much to do; we just had to copy files from one location to another, and these are very small files.

Let us go back to our host machine and see if it's working

So we're back again at our host machine.

Let's go to my web browser to check that

So let me refresh it, and there it is.

And so here is my first web page

So my application was successfully deployed

So now let us enter our name and email here and check if it is getting entered in my database

So let's put in our name and the email; it's xyz.com. And add it: new record created successfully.

It means that it is getting inserted into my database

Now, let's go back and view it, and there it is.

So congratulations, you have successfully written playbooks to deploy your application, provisioned your node machines using playbooks, and orchestrated them using playbooks. Even though at the beginning it seemed like a huge task to do, Ansible playbooks made it so easy.

Hello everyone

This is Saurabh from Edureka, and in today's session we'll focus on what is Puppet.

So without any further ado, let us move forward and have a look at the agenda for today. First,

we'll see why we need configuration management, and the various problems industries were facing before configuration management was introduced. After that, we'll understand what exactly configuration management is, and we'll look at various configuration management tools. After that, we'll focus on Puppet, and we'll see the Puppet architecture along with the various Puppet components. And finally, in our hands-on part, we'll learn how to deploy MySQL and PHP using Puppet.

So I'll move forward, and we'll see the various problems before configuration management.

So this is the first problem, guys. Let us understand this with an example. Suppose you are a system administrator, and your job is to deploy the MEAN stack on, say, four nodes.

All right, the MEAN stack is actually MongoDB, Express, AngularJS and Node.js. So you need to deploy the MEAN stack on four nodes; that is not a big issue. You can manually deploy that on four nodes. But what happens when your infrastructure becomes huge? You may need to deploy the same MEAN stack on hundreds of nodes.

Now, how will you approach the task? You can't do it manually, because if you do it manually it'll take a lot of time, plus there will be wastage of resources. Along with that, there is a chance of human error; I mean, it increases the risk of human error.

All right, so we'll take the same example forward, and we'll see what the other problems before configuration management were.

Now, this is the second problem guys

So it's fine: in the previous step you have deployed the MEAN stack on hundreds of nodes manually. Now what happens? There is an updated version of MongoDB available, and your organization wants to shift to that updated version.

Now, how will you do that? You want to go to the updated version of MongoDB, so what you'll do is actually go and manually update MongoDB on all the nodes in your infrastructure.

Right? So again, that will take a lot of time. But now what happens? That updated version of the software has certain glitches, and your company wants to roll back to the previous version of the software, which is MongoDB in this case.

So you want to go back to the previous version

Now, how will you do that? Remember, you have not kept a historical record of MongoDB during the update. I mean, you have updated MongoDB manually on all the nodes; you don't have a record of the previous version of MongoDB.

So what you need to do is go and manually reinstall MongoDB on all the nodes. So rollback was a very painful task; I mean, it used to take a lot of time.


This is the third problem, guys. Over here, what happens is that you have updated MongoDB in the previous step in, say, the development environment and the testing environment, but when we talk about the production environment, they're still using the previous version of MongoDB.

Now what happens? There might be certain applications that are not compatible with the previous version of MongoDB. All right?

So what happens? A developer writes code that works fine in his own environment, be it his own laptop, and after that it works fine through testing as well. Now, when it reaches production, since they're using the older version of MongoDB, which is not compatible with the application the developers have built, it won't work properly; there might be certain functions which won't work properly in the production environment.

So there is an inconsistency in the computing environment, due to which the application might work in the development environment but not work properly in production.

Now what I'll do is move forward and tell you how important configuration management is, with the help of a use case: configuration management at the New York Stock Exchange.

All right

This is the best example of configuration management that I can think of. What happened was that a software glitch prevented the New York Stock Exchange from trading stocks for almost 90 minutes. This led to millions of dollars of loss. A new software installation caused the problem.

The software was installed on 8 of its 20 trading terminals, and the system was tested out the night before. However, in the morning it failed to operate properly on the 8 terminals.

So there was a need to switch back to the old software. You might think that this was a failure of the New York Stock Exchange's configuration management process, but in reality it was a success: as a result of a proper configuration management process, NYSE recovered from that situation in 90 minutes, which was pretty fast.

Let me tell you guys, had the problem continued longer, the consequences would have been more severe. So because of proper configuration management, the New York Stock Exchange prevented a loss of millions of dollars; they were able to roll back to the previous version of the software within 90 minutes.

So we'll move forward and see what exactly configuration management is.

So what is configuration management? Configuration management is basically a process that helps you to manage changes in your infrastructure in a more systematic and structured way. If you're updating software, you keep a record of what all you have updated, what changes you are making in your infrastructure, all those things. And how do you achieve configuration management? You achieve it with the help of a very important concept called infrastructure as code.


What is infrastructure as code? Infrastructure as code simply means that you're writing code for your infrastructure. Let us refer to the diagram that is present in front of your screen. Now, what happens in infrastructure as code is that you write the code for your infrastructure in one central location.

You can call it a server.

You can call it a master or whatever you want to call it

All right

Now, that code is deployed onto the dev environment, the test environment and the prod environment; basically, your entire infrastructure. All right, whatever node you want to configure, you configure it with the help of that one central location. So let us take an example.

All right, suppose you want to deploy Apache Tomcat on, say, all of your nodes. So what you'll do is, in one location, write the code to install Apache Tomcat, and then you'll push that onto the nodes which you want to configure.

What are the advantages you get here? First of all, the first problem, if you can recall, was that configuring a large infrastructure was a very hectic job. But because of configuration management, it becomes very easy. How does it become easy? You just need to write the code in one central location and replicate that on hundreds of nodes; it is that easy. You don't need to go and manually install or update the software on all the nodes.

All right

Now, the second problem was that you could not roll back to the previous stable version in time. But what happens here? Since you have everything well documented in the central location, rolling back to the previous version is not a time-consuming task.

Now, the third problem was that there was variation or inconsistency across the various teams, like the dev team, test team and prod team; the computing environment was different in dev, testing and prod. But with the help of infrastructure as code, all your three environments, that is dev, test and prod, have the same computing environment. So I hope we are all clear on what configuration management is and what infrastructure as code is.

So we'll move forward and see the different types of configuration management approaches. Now, there are two types of configuration management approaches: one is push configuration, and the other is pull configuration. All right, let me tell you about push configuration first. In push configuration, what happens is that there's one centralized server which has all the configurations inside it, and you want to configure a certain number of nodes.

Say you want to configure four nodes, as shown in the diagram. So what happens is that you push those configurations to these nodes: there are certain commands that you need to execute on that particular central location, and with the help of those commands, the configurations which are present will be pushed onto the nodes. Now, let us see what happens in pull configuration. In pull configuration,

there is one centralized server, but it won't push the configurations onto the nodes. What happens is that the nodes actually poll the central server every, say, 5 or 10 minutes, basically at periodic intervals.

All right, so each node will poll the central server for the configurations, and after that it will pull the configurations that are there in the central server. So over here, you don't need to execute any command; nodes will automatically pull all the configurations that are there in the centralized server. Puppet and Chef both use pull configuration.

But when you talk about push configuration, Ansible and SaltStack use push configuration. So I'll move forward, and we'll look at various configuration management tools.

So these are the four most widely adopted tools for configuration management. I have highlighted Puppet because in this session we are going to focus on Puppet, and it uses pull configuration. When we talk about SaltStack, it uses push configuration, and so does Ansible. And Chef also uses pull configuration. All right, so Puppet and Chef use pull configuration, but Ansible and SaltStack use push configuration.

Now, let us move forward and see what exactly Puppet is. So Puppet is basically a configuration management tool that is used to deploy a particular application, configure your nodes, and manage your servers. It can take your servers online and offline as required, configure them, and deploy a certain package or application onto the node.

So with the help of Puppet, you can do all of that with ease, and the architecture that it uses is a master-slave architecture.
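To make the idea concrete, here is a minimal Puppet manifest (our own illustration, not from this demo) that declares a package and keeps its service running:

```puppet
# site.pp -- applied by the Puppet master to the matching agent nodes
node default {
  # ensure the Apache Tomcat package is installed
  package { 'tomcat':
    ensure => installed,
  }

  # keep the service running and start it on boot
  service { 'tomcat':
    ensure  => running,
    enable  => true,
    require => Package['tomcat'],
  }
}
```

The agents that poll the master receive this desired state and converge to it, whatever state they are currently in.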

Let us understand this with an example

So this is the Puppet Master over here; all the configurations are present here, and these are all the puppet agents. All right, so these puppet agents poll the central server, or the Puppet Master, at regular intervals, and whatever configurations are present, they will pull those configurations.

So let us move forward and focus on the Puppet master-slave architecture. Now, this is the master-slave architecture, guys. Over here, what happens is that the puppet agent, or the puppet node, sends facts to the Puppet Master, and these facts are basically key-value data pairs that represent some aspect of the slave's state. That aspect can be its IP address, uptime, operating system, or whether it's a virtual machine. Facter gathers this basic information about the puppet slave, such as hardware details, network settings, operating system type and version, IP addresses, MAC addresses, all those things.

Now, these facts are then made available in the Puppet Master's manifests as variables. The Puppet Master uses those facts that it has received from the puppet agent, or the puppet node, to compile a catalog. That catalog defines how the slave should be configured, and the catalog is a document that describes the desired state for each resource that the Puppet Master manages on the slave. So it is basically a compilation of all the resources that the Puppet Master applies to a given slave, as well as the relationships between those resources. So the catalog is compiled by the Puppet Master and then sent back to the node, and then finally the slave provides data about how it has implemented that catalog and sends back a report.
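Because facts are exposed as variables, a manifest can branch on them when the catalog is compiled. For example (our own sketch, valid for modern Puppet versions that expose the $facts hash):

```puppet
node default {
  # pick the right Apache package name based on a Facter fact
  if $facts['os']['family'] == 'RedHat' {
    package { 'httpd': ensure => installed }
  } else {
    package { 'apache2': ensure => installed }
  }
}
```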

So basically, the node, or the agent, sends the report back saying that the configurations are complete, and you can actually view that in the Puppet dashboard as well.

Now, the connection between the node, or the puppet agent, and the puppet master happens over SSL, with secure encryption.

All right, we'll move forward and see how the connection between the puppet master and the puppet node actually happens.

So this is how the puppet master and slave connection happens. First of all, the puppet slave requests the Puppet Master's certificate.

All right

It sends a request for the master certificate, and once the Puppet Master receives that request, it will send the master certificate. And once the puppet slave has received the master certificate, the Puppet Master will in turn send a request to the slave for its own certificate.

All right

So it will request a for the puppet agent to send its own certificate

The puppet slave is generate its own certificate and send it to Puppet Master

Now what puppetmaster has to do puppet master has to sign that certificate


So once it has signed the certificate, the puppet slave can actually request the data.

All right, all the configurations. And then finally the puppet master will send those configurations to the puppet slave.

This is how the puppet master and slave communicate.

Now, let me show you practically how this happens.

I have installed puppet master and puppet slave on my CentOS machines.

All right, I'm using two virtual machines, one for the puppet master and another for the puppet slave.

So let us move forward and execute this practically. Now, this is my puppet master virtual machine over here.

I've already created a puppet master certificate, but there is no puppet agent certificate right now. And how will you confirm that? There is a command, puppet

cert list, and it will display all the certificates that are pending in the puppet master,

I mean, that are pending for approval from the master.

All right, so currently there are no certificates available.

So what I'll do is I'll go to my puppet agent, and I'll fetch the puppet master certificate, which I generated earlier, and at the same time generate the puppet agent certificate and send it to the master for signing.

So this is my puppet agent virtual machine. Now, over here, as I've told you earlier as well,

I'll generate a puppet agent certificate and at the same time I'll fetch the puppet master certificate. That agent certificate will be sent to the puppet master, and it will sign my agent's certificate.

So let us proceed with that. For that,

I'll type puppet agent -t, and here we go.

All right, so it is creating a new SSL key for the puppet agent, as you can see in the logs itself.

So it has sent a certificate request, and this is the fingerprint for that.

So it is exiting: no certificate found and waitforcert is disabled.

So what I need to do is go back to my puppet master virtual machine and sign this particular certificate that was generated by the puppet agent.

Now, over here, if you want to see the list of certificates, what do you need to do? You need to type puppet cert list, as I have told you earlier as well.

So let us see what certificates are there now. As you can see, there is a certificate that has been sent by the puppet agent.

All right, so I need to sign this particular certificate.

So for that, what I will do is I'll type puppet

cert sign and the name of the certificate, that is the puppet agent's, and here we go.

So that successfully signed the certificate that was requested by the puppet agent.

Now, what I'll do is go back to my puppet agent virtual machine, and over there

I'll update the changes that have been made on the puppet master.

Let me first clear my terminal, and now, again, I'll type puppet agent -t.

All right, so we have successfully established a secure connection between puppet master and puppet agent.


Let me give you a quick recap of what we have discussed so far.

First, we saw the various problems before configuration management; we focused on three major problems that were there.

All right.

And after that, we saw how important configuration management is with the help of the use case of the New York Stock Exchange.

And finally we saw what exactly configuration management is,

and what is meant by infrastructure as code.

We also looked at various configuration management tools, namely Chef, Puppet, Ansible and SaltStack, and after that we understood what exactly Puppet is,

its master-slave architecture, and how puppet master and puppet slave communicate.

All right, so I'll move forward and we'll see what use case I have for you today.

So what we are going to do in today's session is deploy MySQL and PHP using Puppet.

So for that, what I will do is first download the predefined modules for MySQL

and PHP that are there on Puppet Forge.

All right, those modules will actually define the two classes, that is, PHP and MySQL.

Now, you cannot deploy a class directly onto the nodes.

So what do you need to do? In the Puppet manifest, you need to declare those classes, whatever classes you have defined.

You need to declare those classes.

I'll tell you what manifests and modules are; you don't need to worry about that.

I'm just giving a general overview of what we are going to do in today's session.

So you just need to declare those two classes, that is, PHP and MySQL, and finally just deploy them onto the nodes. It is that simple, guys.

So as you can see, there will be code for PHP and MySQL on the puppet master, and from there it will be deployed onto the nodes, or the puppet agents. We'll move forward and see the various phases in which we'll be implementing the use case.


So first we'll define a class. All right, classes are nothing but collections of various resources.

How will we do that? We'll do that with the help of modules: we'll actually download a module from

Puppet Forge, and we'll use that module, which defines the two classes, as I've told you, PHP and MySQL. Then I'm going to declare those classes in the manifest and finally deploy them onto the nodes.

All right.

So let us move forward. But before actually doing this, it is very important for you to understand certain basics of Puppet code, like what classes, resources, manifests and modules are, all those things.

So we'll move forward and understand those things one by one.


What happens is, first of all, I'll explain resources, classes, manifests and modules separately.

But before that, let me just give you an overview of what these things are. All right, how do they work together? So what happens is, there are certain resources. All right, a user is a resource, a file is a resource.

Basically, anything that is there can be considered a resource.

So multiple resources actually combine together to form a class.

Now, this class, you can declare it in any of the manifests that you want.

You can declare it in multiple manifests.

All right, and then finally you can bundle all these manifests together to form a module.


Let me tell you, guys, it is not mandatory that you combine resources to define a class.

You can actually deploy the resources directly.

It is a good practice to combine the resources in the form of classes, because it becomes easier for you to manage; the same goes for manifests as well.

And I'll tell you how to do that as well.

You can write Puppet code and deploy it onto the nodes, and at the same time it is not necessary for you to bundle the manifests that you are using in the form of modules.

But if you do that, it becomes more manageable and more structured.

All right, so it becomes easier for you to handle multiple manifests.

All right.

So let us move forward and have a look at what exactly resources are and what classes are in Puppet.

Now, what are resources? Anything that is there is a resource. A user is a resource; as I told you, a file can be a resource.

Basically, anything that is there can be considered a resource.

So Puppet code is composed primarily of resource declarations. A resource describes something about the state of the system, such as that a certain user or file should exist, or that a package should be installed. Now, here we have the syntax of a resource.

All right, first you write the type of the resource.

Then you give it a name in single quotes, and then the various attributes that you want to define. In the example

I've shown you, it will create a file, that is /etc/inetd.conf, and this attribute will make sure that it is present.
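Written out, a resource declaration of that shape looks like this (the format is type { 'title': attribute => value }; the path is the one used in the demo that follows):

```puppet
# Ensure the file /etc/inetd.conf exists on the node
file { '/etc/inetd.conf':
  ensure => present,
}
```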

So let us execute this practically, guys.

I'll again go back to my CentOS virtual machine. Now, over here,

what I'll do is use the gedit editor (you can use whatever editor you want), and I'll type the path of my manifests directory, and in this directory

I'll define a file.

All right, with the .pp extension, so I'll just name it site.pp, and here we go.

Now, over here, I'll just write the same resource example that I've shown you in the slide, and let us see what happens: file, open the braces,

now give the path, /etc/inetd.conf,

then a colon, and press enter. Now I'm going to write the attribute, so I'm going to make sure that it is present: ensure

=> present. The file being defined is /etc/inetd.conf.

Then a comma, and now close the braces; save it and close it.

Now, what do you need to do?

You need to go to the puppet agent once more, and over there

I'm going to execute the puppet agent -t command, which will pull the changes made on the puppet master.

Now, over here,

I'll use the puppet agent -t command, and let us see if the file inetd.conf is created or not.

All right, so it has done it successfully.

Now, what I'll do, just to confirm, is use an ls command. For that

I will type ls /etc/inetd.conf,

and as you can see, it has been created. Right, so we have understood what exactly a resource is in Puppet, right? So now let us see what classes are. Classes are nothing but groups of resources.

All right, so you group multiple resources together to form one single class, and you can declare that class in multiple manifests, as we have seen earlier.

It has a syntax too.

Let us see: first you need to write class, then give a name to that class, open the braces, write the code in the body and then close the braces. It's very simple, and it is pretty much similar to other coding languages; if you have come across any other coding language,

it is pretty much similar to the classes that you define over there as well.
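A minimal sketch of that syntax (the class name and its body here are illustrative, not from the session):

```puppet
# class <name> { ...resource declarations... }
class base_files {
  file { '/etc/inetd.conf':
    ensure => present,
  }
}
```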

All right, so we have a question from one of our attendees, who's asking: can you specify what exactly the difference between a resource and a class is? Classes are actually nothing but bundles of resources.

All right, all those resources grouped together form a class. And what you can say is that a resource describes a single

file or a package, but a class describes everything needed to configure an entire service or application. So we'll move forward and we'll see what manifests are. So this is a Puppet manifest; now, what exactly is it? Every slave has got its configuration details in the puppet master, and they are written in the native Puppet language.

These details are written in a language that Puppet can understand, and programs in that language are termed manifests.

So this is a manifest; all Puppet programs are basically termed manifests.

So, for example, you can write a manifest on the puppet master that creates a file and installs Apache on the puppet slaves connected to the puppet master.

All right, so you can see I've given you an example over here.

It uses a class called apache, and this class is defined with the help of predefined modules that are there on Puppet Forge, along with various attributes, like defining the virtual host, the port and the document root. So basically, there are two ways to declare a class in a Puppet manifest: either

you can just write include and the name of the class, or, if you don't want to use the default attributes of that class, you can make changes to them by using this particular syntax: you write class, open the braces, then the class name, a colon, whatever attributes you want apart from the ones that are there by default, and then finally close the braces.
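The two declaration styles can be sketched like this (the attribute shown in the second form is hypothetical, not an actual parameter of the Forge apache module):

```puppet
# Style 1: declare the class with all its default attributes
include apache

# Style 2: declare it while overriding selected attributes
class { 'apache':
  docroot => '/var/www/example',  # hypothetical attribute override
}
```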

All right.

So now I'll execute a manifest practically that will install Apache on my nodes.

All right, now I need to deploy Apache using Puppet.

All right.

So what I need to do is write the code to deploy Apache in the manifests directory.

I've already created a file with the .pp extension.

You may remember it from when I was talking about resources, right? So now, again, I'll use the same file, that is site.pp, and I'll write the code to deploy Apache.

All right.

So what I'll do is use the gedit editor (you can use whatever editor you feel like): gedit /etc/puppet/manifests/site.pp,

and here we go.

Now, over here,

I'll just delete the resource that I've defined here.

I like my screen to be nice and clean. And now I will write the code to deploy Apache. So for that I will type package,

then 'httpd' and a colon. Now I need to ensure it is installed,

so for that I'll type ensure => installed,

and give a comma. Now I need to start the Apache service; for that

I'll type service,

then 'httpd', ensure => running, then a comma. Now close the braces, save it and close it.
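Put together, the site.pp dictated above comes out to roughly this:

```puppet
# Install the Apache package and keep its service running
package { 'httpd':
  ensure => installed,
}

service { 'httpd':
  ensure => running,
}
```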

Let me clear my terminal.

And now, what I'll do is go to my puppet agent; from there

it will pull the configurations that are present on my puppet master.

Now, what happens is that the puppet agent actually pulls the configuration from the puppet master periodically, and the interval is around 30 minutes, right? It takes around half an hour: after every half an hour, the puppet agent pulls the configuration from the puppet master. You can configure that as well.

If you don't want to wait, just throw in the command puppet agent -t, and it will immediately pull the configurations that are present on the puppet master.

So for that I will go to my puppet agent virtual machine. Now, here, what I'll do is type the command puppet agent -t, and let us see what happens.

So it is done now. Now, what I'll do, just to confirm, is open my browser.

And over here, I will type the hostname of my machine, which is localhost, and let us see if Apache is installed.

All right, so Apache has been successfully installed. Now, let us go back to our slides and see what exactly modules are.

So what are Puppet modules? A Puppet module can be considered a self-contained bundle of code and data.

Let us put it another way:

we can say that a Puppet module is a collection of manifests and data, such as facts, files, templates, etc.

All right, and they have a specific directory structure.

Modules are basically used for organizing your Puppet code, because they allow you to split your code into multiple manifests.

So they provide you a proper structure in order to manage manifests, because in real time you'll have multiple manifests to manage.

It is always a good practice to bundle them together in the form of modules.

So by default, Puppet modules are present in the directory /etc/puppet/modules; whatever modules you download from Puppet Forge will be present in this modules directory.

All right, even if you create your own modules, you have to create them in this particular directory,

that is, /etc/puppet/modules.

So now let us start the most awaited topic of today's session, that is, deploying PHP and MySQL using Puppet.

Now, what I'm going to do is download two modules, one for PHP and another for MySQL.

So those two modules will actually define the PHP and MySQL classes for me. Now, after that, I need to declare those classes in the manifest,

that is, the site.pp file present in the Puppet manifests directory.

So I'll declare those classes in the manifest.

And then, finally, I'll throw in the command puppet agent -t on my agent, and it will pull those configurations, and PHP and MySQL will be deployed.

So basically, when you download a module, you are defining a class.

You cannot directly deploy the class; you need to declare it in the manifest. Now, I will again go back to my CentOS box. Over here,

what I'll do is download the MySQL module from Puppet Forge. So

for that, I'll type puppet module

install puppetlabs-mysql,

then --version and the version name, so I will use 3.10.0, and here we go.

So what is happening here? As you can see, it's saying preparing to install into /etc/puppet/modules, right? So it will be installed in this directory. Apart from that,

it is actually downloading this from forgeapi.puppetlabs.com.


So it is done now; that means we have successfully installed the MySQL module from Puppet Forge.

All right.

Let me just clear my terminal, and now I will install the PHP module. For that

I'll type puppet module install,

then the PHP module's name, then --version, that is, 4.0.0-beta1, and here we go.

So it is done.

Now, that means we have successfully installed two modules, one for PHP and the other for MySQL.

All right.

Let me show you where they are present on my machine.

So what I'll do is just hit an ls command on /etc/puppet/modules.

And here we go.

So, as you can see, there is a MySQL module and a PHP module, which we have just downloaded from Puppet Forge.

Now, what I need to do: I have the MySQL and PHP classes defined, but I need to declare them in the site.pp file present in the Puppet manifests directory.

So for that, what I will do is first use the gedit editor; you can use whatever editor you want.

I'm saying it again and again, but you can use whatever editor you want.

I personally prefer gedit. And now, manifests/site.pp, and here we go.

Now, as I told you earlier as well, I like my screen to be clean and nice.

So I'll just remove this, and over here

I will just declare the two classes,

that is, MySQL and PHP:

include mysql::server,

and on the next line

I'll include the PHP class; for that, include php.

Just save it, now close it.
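So at this point the whole site.pp is just the two declarations, using the class names provided by the downloaded Forge modules:

```puppet
# Declare the classes defined by the Forge modules on every node
include mysql::server
include php
```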

Let me clear my terminal. Now, what I'll do is go to my puppet agent.

And from there

I'll hit the command puppet agent -t, which will pull the configurations from the puppet master.

So let us just proceed with that.

Let me first clear my terminal, and now I'll type puppet agent -t, and here we go.

So we have successfully deployed PHP and MySQL using Puppet.

All right, let me just clear my terminal, and I'll just confirm it by typing mysql -V. All right, this will display the version. Now I'll just exit from here, and now I'll show you the PHP version; for that I'll type php --version, and here we go.

All right, so this means that we have successfully installed PHP and MySQL using Puppet.

So now let me just give you a quick recap of what we have discussed so far.

All right.

So first we saw why we need configuration management,

and what the various problems were before configuration management. And we understood the importance of configuration management with the use case of the New York Stock Exchange.

All right. After that, we saw what exactly configuration management is, and we understood a very important concept called infrastructure as code.

Then we focused on the various types of configuration management approaches, namely push and pull. Then we saw various configuration management tools, namely Puppet, Chef, Ansible and SaltStack. After that,

we focused on Puppet, and we saw what exactly Puppet is, its master-slave architecture, how puppet master and slave communicate, all those things. Then we understood the Puppet code basics:

we understood what resources, classes, manifests and modules are. And finally, in our hands-on part,

I told you how to deploy PHP and MySQL using Puppet. My name is Sato,

and today we'll be talking about Nagios.

So let's move forward and have a look at the agenda for today.

So this is what we'll be discussing:

we'll begin by understanding why we need continuous monitoring, what continuous monitoring is, and what the various tools available for continuous monitoring are.

Then we are going to focus on Nagios; we are going to look at its architecture and how it works.

We are also going to look at one case study, and finally, in the demo,

I will be showing you how you can monitor a remote host using NRPE, which is nothing but the Nagios Remote Plugin Executor.

So I hope you all are clear with the agenda.

Let's move forward, and we'll start by understanding why we need continuous monitoring.

Well, there are multiple reasons, guys, but I'll mention four very important reasons why we need continuous monitoring.

So let's have a look at each of these one by one.

The first one is failure of CI/CD pipelines. Since DevOps is a buzzword in the industry right now,

and most of the organizations are using DevOps practices,

obviously they are implementing CI/CD pipelines, which are also called digital pipelines. Now, the idea behind these CI/CD pipelines is to make sure that releases happen more frequently and are more stable, in an automated fashion.

Right? Because there are a lot of competitors you might have in the market, and you want to release your product before them.

So agility is very, very important.

And that's why we use CI/CD pipelines.

Now, when you implement such a pipeline, you realize that there can't be any manual intervention at any step in the process, or the entire pipeline slows down.

So you will basically defeat the entire purpose: manual monitoring slows down your deployment pipeline and increases the risk of performance problems propagating into production, right? So I hope you have understood this.

If you notice the three points that I've mentioned, they're pretty self-explanatory. Rapid introduction of performance problems and errors, right? Because you are releasing software more frequently,

there has to be rapid introduction of performance problems. Then, rapid introduction of new endpoints causing monitoring issues:

again, this is pretty self-explanatory. Then, lengthy root cause analysis as the number of services expands: because you are releasing software more frequently, the number of services is definitely going to increase, and there's a lengthy root cause analysis, you know, because of which you lose a lot of time, right? So let's move forward, and we'll look at the next reason why we need continuous monitoring.

For example, we have an application which is live, right? We have deployed it on the production server.

We are running APM

solutions, APM being application performance monitoring.

We are monitoring our application, how its performance is,

whether there is any downtime, all those things, right? And then we figure out certain issues with our application, say performance issues. Now, to go back, basically to roll back and to incorporate changes to remove those bugs, developers are going to take some time, because the process is huge, because your application is already live, right? You cannot afford any downtime.

Now, imagine: what if, before releasing the software, on a pre-production server, which is nothing but a replica of my production server,

I could run those APM solutions to figure out how my application is going to perform before it actually goes live? Right, so that way, whatever issues are there, developers will be notified beforehand, and they can take corrective action.

So I hope you have understood my point.

The next thing is: server health cannot be compromised at any cost.

So I think it's pretty obvious, guys.

Your application is running on a server.

You cannot afford any downtime on that particular server, or an increase in response time either, right?

So you require some sort of monitoring system to check your server health as well.

Right? What if your application goes down because your server isn't responding? You don't want any scenario like that in a world like today, where everything is so dynamic and the competition is growing.

You want to give the best service to your customers, right? And I think server health is very, very important, because that's where your application is running, guys. I don't think

I have to stress this too much, right? So we basically require continuous monitoring of the server as well.

Now, let me just give you a quick recap of the things that we have discussed.

So we have understood why we need continuous monitoring by looking at three or four examples, right? The first thing is, we saw what the issues with a CI/CD pipeline are, right? We cannot have any sort of manual intervention for monitoring in such a pipeline,

because you're going to defeat the purpose of such a pipeline.

Then we saw that developers have to be notified about the performance issues of the application before releasing it in the market.

Then we saw that server health cannot be compromised at any cost.

Right? So these are the three major reasons why I think continuous monitoring is very important for most organizations, right? Although there are many other reasons as well. Now,

let's move forward and understand what exactly continuous monitoring is, because we just talked about a lot of scenarios where manual monitoring, or traditional monitoring processes, are not going to be enough.

Right? So let us understand what exactly continuous monitoring is and how it is different from the traditional process. So, basically, continuous monitoring tools resolve any sort of system error before it has a negative impact on your business.

It can be low memory, an unreachable server, etc.

Apart from that,

they can also monitor the business processes and the application, as well as your server, which we have just discussed.

Right? So continuous monitoring is basically an effective system where the entire IT infrastructure, starting from your application to your business process to your server, is monitored in an ongoing way and in an automated fashion, right? That's basically the crux of continuous monitoring.

So these are the multiple phases given to us by NIST for implementing continuous monitoring; NIST is basically the National Institute of Standards and Technology.

So let me just take you through each of these stages. The first phase is Define: you basically develop a monitoring strategy. Then, what are you going to do? You are going to establish measures and metrics, and you are also going to establish monitoring and assessment frequencies, that is, how frequently you are going to monitor, right?

Then you are going to implement whatever you have established, the plan that you have laid down.

Then you're going to analyze data and report findings, right? So whatever issues are there, you're going to find them. Post that, you're going to respond to and mitigate those errors, and finally you're going to review and update the application, or whatever you were monitoring. Now,

let us move forward; there is also another model that gives us the multiple phases involved in continuous monitoring.

So let us have a look at those,

one by one. The first thing is continuous discovery.

So continuous discovery is basically discovering and maintaining a near real-time inventory of all networks and information assets, including hardware and software. If I have to give an example: basically identifying and tracking confidential and critical data stored on desktops, laptops and servers.

Right, next comes continuous assessment.

It basically means automatically scanning and comparing information assets against industry and data repositories to determine vulnerabilities.

That's the entire point of continuous assessment.

Right? So one way to do that is prioritizing findings and providing detailed reports, right, by department, platform, network, asset and vulnerability type. Next comes continuous audit: continuously evaluating your client, server and network device configurations and comparing them with standard policies is basically what continuous audit is, right?

So basically, what you're going to do here is gain insights into problematic controls, usage patterns and access permissions for sensitive data. Then comes continuous patching.

It means automatically deploying and updating software to eliminate vulnerabilities and maintain compliance.

Right? So if I have to give you an example: maybe correcting configuration settings, including network access, and provisioning software according to end users' roles and policies,

all those things. Next comes continuous reporting.

So aggregating the scanning results from different departments, scan types and organizations into one central repository is basically what continuous reporting is, right, for automatically analyzing and correlating unusual activities, in compliance with regulations.

So I think it's pretty easy to understand. If I have to repeat it once more, I would say continuous discovery is basically discovering and maintaining a near real-time inventory of all the network and information assets,

whether hardware or software. Then continuous assessment means automatically scanning and comparing the information assets from continuous discovery against industry and data repositories to determine vulnerabilities. Continuous audit is basically continuously evaluating your client, server and network device configurations and comparing them with standards and policies. Continuous patching is automatically deploying and updating software to eliminate vulnerabilities and maintain compliance, right? Patching is basically your remedy, the step where you actually respond to the threats or vulnerabilities that you see in your application. Continuous reporting is basically aggregating scanning results from different departments, scan types and organizations into one central repository.

So these are nothing but the various phases involved in continuous monitoring.

Let us have a look at the various continuous monitoring tools available in the market.

So these are pretty famous tools.

I think a lot of you might have heard about these tools. One is Amazon CloudWatch, which is nothing but a service provided to us by AWS. Splunk is also very famous,

and we have ELK and Nagios, right? ELK is basically Elasticsearch, Logstash and Kibana. In this session

we are going to focus on Nagios, because it's a pretty mature tool; a lot of companies have used it, it has a major market share as well, and it's basically well suited for your entire IT infrastructure, whether it's your application, your server, or even your business process. Now, let us have a look at what exactly Nagios is and how it

works. So Nagios is basically a tool used for continuous monitoring of systems, your applications, your services, business processes, etc., in a DevOps culture. Now, in the event of a failure,

Nagios can alert technical staff of the problem, allowing them to begin remediation processes before outages affect business processes and end users or customers.

So I hope you are getting my point.

It can alert the technical staff about the problem, and they can begin remediation processes before outages affect their business processes or end users or customers. Right, with Nagios

you don't have to explain how an unseen infrastructure outage affects your organization's bottom line, right? So let us focus on the diagram that is there in front of your screen.

So Nagios basically runs on a server, usually as a daemon or a service, and it periodically runs plugins residing on the same server. What do they do? They basically contact hosts and servers on your network or on the internet.

Now, one can view the status information using the web interface, and you can also receive email or SMS notifications if something goes wrong. Right, so basically the Nagios daemon behaves like a scheduler that runs certain scripts at certain moments.

It stores the results of those scripts and will run other scripts if those results change.

I hope you are getting my point here. Now,

if you're wondering what plugins are: these are nothing but compiled executables or scripts.

They can be Perl scripts, shell scripts, etc., that can run from a command line to check the status of a host or a service. Nagios uses the results from the plugins to determine the current status of the hosts

and services on your network.
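To make that concrete, here is a minimal sketch of what such a plugin can look like, written as a shell script (the check, its threshold and the function name are illustrative, not from the session). Nagios shows the single line printed to stdout in its web interface and maps the return code to a state: 0 = OK, 1 = WARNING, 2 = CRITICAL, 3 = UNKNOWN.

```shell
#!/bin/sh
# Minimal Nagios-style plugin sketch: report root filesystem usage.
# Return codes follow the Nagios plugin convention: 0=OK, 1=WARNING.

check_root_disk() {
    threshold=90   # warn above this percentage (illustrative value)
    # df -P gives portable output; strip the '%' from the usage column
    usage=$(df -P / | awk 'NR==2 { gsub("%", "", $5); print $5 }')
    if [ "$usage" -ge "$threshold" ]; then
        echo "WARNING - root filesystem at ${usage}% capacity"
        return 1
    fi
    echo "OK - root filesystem at ${usage}% capacity"
    return 0
}

# Run the check once; capture the state code the way Nagios would
state=0
check_root_disk || state=$?
```

Dropping a script like this into the Nagios plugin directory and wiring it up with a check_command definition is all it takes; the scheduler runs it periodically and reacts when the status line or return code changes.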

Now, let us see the various features of Nagios.

Let me just take you through all these features one by one.

It's pretty scalable, secure and manageable as well.

It has a good log and database system.

It automatically sends alerts, which we just saw.

It detects network errors and server crashes.

Plugins are easy to write:

you can write your own plugins based on

your requirements and your business needs. Then, you can monitor your business processes and IT infrastructure in a single pass, guys. Issues can be fixed automatically:

if you have configured it in such a way, then definitely you can fix those issues automatically. And it also has support for implementing redundant monitoring hosts.

So I hope you have understood these features. There are many more, but these are the pretty attractive ones, and it's because of these features that Nagios is so popular. Let us now discuss the architecture of Nagios in detail.

So basically, Nagios has a server-agent architecture. Now, usually, in a network, a Nagios server is running on a host, which we just saw in the previous diagram, right? So consider this as my host.

So the Nagios server is running on a host, and plugins interact with local and remote hosts.

So here we have plugins.

So these will interact with the local resources or services, and these will also interact with the remote resources, services or hosts, right?

These plugins will send the information to the scheduler, which will display it in the GUI.

Now, let me repeat it:

Nagios is built on a

server-agent architecture.

Right? And usually a Nagios server is running on a host, and these plugins will interact with the local host and services, or even the remote hosts and services.

Right? And these plugins will send the information to the schedulernagios process scheduler, which will then display it on the web interface and if somethinggoes wrong the concern teams will be notified Via SMS or through email, right? So I thinkwe have covered quite a lot of theory

So let me just go ahead and open my CentOS virtual machine where I've already installed Nagios. Let me just open my CentOS virtual machine first.

So this is my CentOS virtual machine, guys.

And this is how the Nagios dashboard looks.

I'm running it at port 8000.

You can run it wherever you want; I've explained in the installation video how you can install it.

If you notice, there are a lot of options on the left-hand side. You can, you know, go ahead and play around with them.

You'll get a better idea.

But let me just focus on a few important ones.

So here we have a Map option, right? If you click on that, then you can see that you have a local host and you have a remote host as well.

My Nagios process is monitoring both the local host and the remote host. The remote host is currently down.

That's why you see it like this. When it is up and running, I'll show you how it basically looks. Now if I go ahead and click on Hosts,

you will see all the hosts that I'm currently monitoring. I'm monitoring edureka and localhost. Edureka is basically a remote server, and localhost is the one on which my Nagios server is running, right? So obviously it is up, and the other server is down.

If I click on Services, you can see these are the services that I'm monitoring. For my remote host I'm monitoring CPU load, ping, and SSH, and for my localhost

I'm monitoring current load, current users, HTTP, ping, root partition, SSH, swap usage, and total processes.

You can add as many services as you want.

All you have to do is change the host's .cfg file, which I'm going to show you later.

But for now, let us go back to our slides; we'll continue from there.

So let me just give you a small recap of all the things we have discussed.

We first saw why we need continuous monitoring.

We saw various reasons why industries need continuous monitoring and how it is different from traditional monitoring systems.

Then we saw what exactly continuous monitoring is and the various phases involved in implementing a continuous monitoring strategy.

Then we saw the various continuous monitoring tools available in the market, and we focused on Nagios: we saw what Nagios is, how it works, and what its architecture is, right?

Now we're going to talk about something called NRPE, the Nagios Remote Plugin Executor, which is basically used for monitoring remote Linux or Unix machines.

So it'll allow you to execute Nagios plugins on those remote machines.

Now the main reason for doing this is to allow Nagios to monitor local resources, you know, like CPU load, memory usage, etc., on remote machines. Since these local resources are not usually exposed to external machines, an agent like NRPE must be installed on the remote Linux or Unix machines.

I have installed that in my CentOS box too; that's why I was able to monitor the remote Linux host that I'm talking about.

If you check out my Nagios installation video, I have also explained how you can install NRPE. Now if you notice the diagram here, what we have is basically the check_nrpe plugin residing on the local monitoring machine.

This is your local monitoring machine, which we just saw, right? So this is where my Nagios server is. Now the check_nrpe plugin resides on the local monitoring machine, where your Nagios server is, right?

So the one which we saw is basically my local machine, or you can say where my Nagios server is, right? So this check_nrpe plugin resides on that particular machine. Now this NRPE daemon, which you can see in the diagram, runs on the remote machine, the remote Linux or Unix machine, which in my case was edureka, if you remember, and since I didn't start that machine, it was down, right? So that NRPE daemon will run on that particular machine. Now, there is a Secure Sockets Layer (SSL) connection between the monitoring host and the remote host; you can see it in the diagram as well, right? So what it is doing is checking the disk space, load, HTTP, FTP, and other remote services on the other host, since these are local resources and services there.

So basically this is how NRPE works, guys.

You have the check_nrpe plugin residing on the host machine.

You have the NRPE daemon running on the remote machine.

There's an SSL connection between them, right? And this check_nrpe plugin basically helps us to monitor that remote machine.

Let's look at one very interesting case study.

This is from Bitnetix.

And I found it on the Nagios website itself.

So if you want to check it out, go ahead and check out their website as well.

They have pretty cool case studies apart from this one.

So there are a lot of other case studies on their website.

So Bitnetix basically provides outsourced IT management and consulting to nonprofits and small to medium businesses. Now Bitnetix got a project where they were supposed to monitor an online store for an e-commerce retailer with a billion-dollar annual revenue, which is huge, guys.

Now, it was not only supposed to, you know, monitor the store, but it also needed to ensure that the cart and the checkout functionality were working fine, and it was also supposed to check for website defacement and notify the necessary staff if anything went wrong. Right, seems like an easy task, but let us see the problems that Bitnetix faced. Now Bitnetix hit a roadblock upon realizing that the client's data center was located in New Jersey, more than 500 miles away from their staff in New York, right? There was a distance of 500 miles between where their staff was located and the data center.

Now, let us see the problems they faced because of this. The two areas needed unique but at the same time comprehensive monitoring for the dev, test, and prod environments of the same platform, right? And the next challenge was that monitoring would be hampered by the firewall restrictions between different applications, sites, functions, etc.

So I think a lot of you know about this: firewalls can sometimes be a nightmare, right? Apart from that, most of the notifications that were sent to the client were ignored, because mostly those were false positives, right? So the client didn't bother to even check those notifications. Now, what was the solution? The first solution they thought of was adding SSH firewall rules for Network Operations Center personnel and equipment. The second was analyzing web pages to see if there were any problems. The third and very important point was converting notifications to Nagios alerts, and the false-positive problem that we saw was completely removed with this escalation logic.

They were converting notifications to Nagios alerts and escalations with specific time periods for different groups, right? I hope you are getting my point here. Then, configuring event handlers to restart services before notification was basically a fix for 90% of the issues, and they used Nagios Core on multiple servers at the NOC facility, where each Nagios worker was deployed at the application level with direct access to the host.

So whatever Nagios worker or agent or remote machine we have was deployed at the application level and had direct access to the host, or the master, whatever you want to call it. And they implemented the same architecture for the production, quality assurance, staging, and development environments.

Now, let's see what the result was. Because of this, there was a dramatic reduction in notifications,

thanks to the event handlers' new configuration.

Then there was an increase in uptime from 85% to 98% yearly, which is significant, guys, right? Then they saw a dramatic reduction in false positives because of the escalation logic that I was just talking about. The fourth point is eliminating the need to log into multiple boxes and change configuration files,

thanks to the Nagios configuration being maintained in a central repository, or a central master, from which it can be pushed automatically to all the servers, slaves, or agents, whatever you want to call them.

So this was the result of using Nagios.


Right, now is the time to check out a demo, where what I'll be doing is monitoring a couple of services, actually more than a couple of services, of a remote Linux machine through my Nagios host, which I just showed you, right? So from there, I'll be monitoring a remote Linux host called edureka, and I'll be monitoring three or four services; you can have whatever you want. And let me just show you the process once you have installed Nagios.

I'll show you what you need to do in order to make sure that you have a remote host or a remote machine being monitored by your Nagios host.

Now in order to execute this demo, which I'm going to show you,

you must have the LAMP stack on your system.

Right, Linux, Apache, MySQL, and PHP, and I'm going to use CentOS 7 here.

Let me just quickly open my CentOS virtual machine and we'll proceed from there.

So guys, this is my CentOS VirtualBox where I've already installed Nagios, as I've told you earlier as well. This is where my Nagios host is running, or you can say the Nagios server is running, and you can see the dashboard in front of your screen as well.

Right? So let me just quickly open the terminal first and clear the screen.

So let me just show you where I've installed Nagios. This is the path, right? If you notice in front of your screen, it's in /usr/local/nagios. What I can do is just clear the screen, and I'll show you what all the directories are inside this. So we can go inside this etc directory.

And inside this I'm going to go inside the objects directory, right? So why I'm doing this is basically: if I want to add any command, for example, I want to add the check_nrpe command.

That's how I'm going to monitor my remote Linux host, if you remember from the diagram, right? So that's what I'm going to do.

I'm going to add that particular command.

I've already done that.

So let me just show you how it looks. Just type gedit, or you can choose whatever editor you like, and open the commands.cfg file. Let me just open it.

So these are the various commands that I was talking about.

Now, you can just have a look at all these commands.

This one is basically to notify by email if anything goes down or anything goes wrong in the host.

This one is for services.

Basically, it'll notify through email if there's any problem with a service.

This will check if my host machine is alive.

I mean, is it up and running? Now this command is basically to check the disk space, like the local disk, then the load, right?

You can see all of these things here: swap, FTP.

So these commands are here, and you can have a look at all of these commands which I've mentioned. The last command you see is one I've added manually, because all these commands you get by default once you install Nagios, but check_nrpe, which I'm highlighting right now with my cursor, is something which I have added in order to make sure that I can monitor the remote Linux host.
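For reference, a check_nrpe command definition like the one highlighted here follows the pattern from the standard NRPE documentation; this is a sketch, and the plugin path macro may differ on your install:

```
define command{
    command_name    check_nrpe
    command_line    $USER1$/check_nrpe -H $HOSTADDRESS$ -c $ARG1$
}
```

$USER1$ normally expands to the plugin directory (typically /usr/local/nagios/libexec), and $ARG1$ names the remote command for the NRPE daemon to run, so a service definition can invoke it as, for example, check_nrpe!check_load.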

Now, let me just go ahead and save this, right?

Let me clear my screen again, and I'll go back to my Nagios directory.

Let me clear my screen again. Now, basically what this will do is allow you to use the check_nrpe command in your Nagios service definitions, right?

What we need to do next is update the NRPE configuration file.

So use your favorite editor and open nrpe.cfg, which you will find in this particular directory itself.

So all I have to do is first hit ls, and then I can just check out the etc directory.

Now if you notice, there is an nrpe.cfg file, right? I've already edited it.

So I'll just go ahead and show you with the help of gedit, or you can use whatever editor you prefer. Now over here,

you need to find the allowed_hosts directive and add the private IP address of your Nagios server to the comma-delimited list. As you scroll down, you will find something called allowed_hosts.

Right? So just add a comma and then the IP address. So currently, let me just open it once more.

I'm going to use sudo because I don't have the privileges. Now in this allowed_hosts directive,

all I have to do is add a comma and the IP address, so it is 192.168.1…

Just go ahead, save it, come back, and clear the terminal. Now save and exit.

Now this configures NRPE to accept requests from your Nagios server over its private IP address, right? Then just go ahead and restart NRPE to put the changes into effect. Now on your Nagios server,
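The edited directive in nrpe.cfg ends up looking roughly like this; the second address is a placeholder, so substitute the actual IP of the Nagios server that should be allowed to query the daemon:

```
# nrpe.cfg: comma-delimited list of IPs allowed to talk to the NRPE daemon
allowed_hosts=127.0.0.1,192.168.1.x
```

After saving, restarting the daemon (for example with sudo systemctl restart nrpe.service, as shown later in the video) makes the change take effect.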

you need to create a configuration file for each of the remote hosts that you monitor, as I was mentioning before as well. You're going to find them in the etc/servers directory, and let me just go ahead and open that for you.

Let me go to the servers directory.

Now if you notice here, there is an edureka.cfg file.

This is basically the host

we'll be monitoring right now.

If I go ahead and show you what I have written here: basically, first what I have done is define the host.

It's basically a Linux server, and I've given the name of the host.

The host name is edureka, the alias is whatever you want to give, then this is the IP address, the maximum check attempts, and the check period.

I want to check it 24/7. The notification interval is what I have mentioned here, along with the notification period. So this is basically all about my host. Now, in that host, what services am I going to monitor? I'll monitor generic services like ping, then I want to monitor SSH, and then CPU load. So these are the three services that I'll be monitoring, and you can find that in your etc directory.
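Putting the pieces together, a host file along the lines of the edureka.cfg described here might look like the sketch below. The IP address, intervals, and thresholds are illustrative placeholders, and the object definitions follow the standard Nagios quickstart layout; only the CPU load check goes through check_nrpe to the remote agent:

```
define host{
    use                     linux-server       ; inherit the stock template
    host_name               edureka
    alias                   edureka remote host
    address                 192.168.1.x        ; placeholder IP
    max_check_attempts      5
    check_period            24x7
    notification_interval   30
    notification_period     24x7
}

define service{
    use                     generic-service
    host_name               edureka
    service_description     PING
    check_command           check_ping!100.0,20%!500.0,60%
}

define service{
    use                     generic-service
    host_name               edureka
    service_description     SSH
    check_command           check_ssh
}

define service{
    use                     generic-service
    host_name               edureka
    service_description     CPU Load
    check_command           check_nrpe!check_load
}
```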

Under that directory, you have to create a proper configuration file for every host that you want to monitor. Let me clear my terminal again, just to show you

my remote machine as well. Let me just open that.

So this is my remote machine, guys.

Over here, I've already installed NRPE, so I'm just going to show you how you can restart NRPE: systemctl restart

nrpe.service, and here we go, it's asking for the password.

I've given that, and now the NRPE service has started; actually, it has restarted.

I've already started it before as well.

Let me just show you how my Nagios dashboard looks on my server.


This is my dashboard again

If I go to my Hosts tab, you can see that we are monitoring two hosts: edureka and localhost.

Edureka is the one which I just showed you, which is up and running, right? I can go ahead and check out this map, the legacy map viewer, as well, which basically shows me my edureka remote host. Then I also have various services that I'm monitoring.

So if you remember, I was monitoring CPU load, ping, and SSH, which you can see over here as well.

Right? So this is all for today's session.

I hope you guys have enjoyed this video.

If you have any questions, you can go ahead and mention them in the comments section.

And if you're looking to gain hands-on experience in DevOps, you can go ahead and check out our website at www.….com/devops.

You can view upcoming batches and enroll for the course that will set you on the path of becoming a successful DevOps engineer, and if you're still curious to know more about DevOps roles and responsibilities, you can check out the videos mentioned in the description.

Thank you, and happy learning.