Mastering Bitbucket Pipelines for Continuous Integration and Continuous Deployment

Chris Frewin, Full Stack Software Engineer


Lessons in This Class

  • 1. Introduction (2:27)
  • 2. Environment Setup (4:18)
  • 3. Creating Your First Pipeline (6:36)
  • 4. Using SCP to Transport Artifacts from the Build (15:10)
  • 5. Using SSH to Run Commands on the Server (10:12)
  • 6. Creating a Slack Messaging Bot to Add Messaging Functionality (15:19)
  • 7. Bootstrapping with Create React App (4:37)
  • 8. Defining Environment Files and Using Them in a Component (4:25)
  • 9. Utilizing the Environments in a Bitbucket Pipeline File (3:34)
  • 10. Refactoring the ngrok Server for the React TypeScript Project (5:22)
  • 11. Refactoring the Pipeline for the React TypeScript Project (10:53)
  • 12. Utilizing Tarballs to Transfer the node_modules Folder Faster (5:24)
  • 13. Setting forever to 'watch' Mode on the Server (4:29)
  • 14. The Big Advantage of Pipelines: Testing in Staging, Merging to Master! (2:49)
  • 15. BONUS: NGINX Proxy Configuration and React PUBLIC_URL Environment Variable (6:11)
  • 16. BONUS: Using Docker as an Alternative to a Remote Linux Server (9:55)


177 Students

About This Class

Bitbucket Pipelines brings continuous integration and delivery to Bitbucket Cloud, empowering individuals and teams to build, test, and deploy their code directly from Bitbucket!

It's a tool that has saved me an immeasurable amount of time over the years, and it's something I use for almost every repository in my Bitbucket account.

Before working with pipelines directly, we'll ensure we're all using the same version of Node using nvm.

We'll then create YOUR very first Bitbucket Pipeline, defining a bitbucket-pipelines.yml file.

We'll add complexity to the pipeline, learning how to use SCP (secure copy) and SSH (secure shell) within our pipeline.

We'll get even more complex, learning how to use various environments, such as develop, staging, and master branches, in our pipeline.

We'll even learn how to use a Slack messaging bot to send messages during various points during the build!

The course ends with a few goodies and bonuses, like how to get the setup working via an NGINX proxy.

***NOTE: If you do not have an external target server to use, bonus lesson 16 shows you how to run a local Linux instance with Docker, and how to add a port forwarding rule to your local router so you can access that box from anywhere in the world!

Meet Your Teacher


Chris Frewin

Full Stack Software Engineer


Hi everyone!

I've been a professional full stack software engineer for 7+ years, and I've been programming for many more. In 2014, I earned two separate degrees from Clarkson University: Mechanical Engineering and Physics. I continued at Cornell for my M.S. Degree in Mechanical Engineering. My thesis at Cornell was a technical software project where I first learned Bash and used a unique stack of Perl and Fortran, producing a publication in the scientific journal Combustion and Flame: "A novel atom tracking algorithm for the analysis of complex chemical kinetic networks".

After opening up my first terminal while at Cornell, I fell in love with software engineering and have since learned a variety of frameworks, databases, languages, and design patterns, including TypeScript, among others.

Level: All Levels



Transcripts

1. Introduction: Hi everybody. This course is all about mastering Bitbucket Pipelines for continuous integration and continuous deployment. I was looking around on Skillshare and realized there were no courses specifically for Bitbucket Pipelines. Bitbucket Pipelines is a tool I use basically every day as part of my development workflow, and now that I've been using it for a while I'm basically hooked: it saves me a lot of time on things I used to do manually. So I'm hoping I can share my experience and everything I've learned along the way, so you too can benefit from saving a lot of time by automating things like your tests, your deployments, and your build processes. Let's take a look at what we're going to cover in this course. Before working with pipelines directly, we'll set up our environment and Node.js and get started with a simple boilerplate repository. We'll then create a very basic bitbucket-pipelines.yml file to start our very first build. We'll add complexity to our build by using some custom steps to build the site source, and then use SCP to deploy what are known as artifacts to our server of choice. We'll get even more complex, adding SSH functionality and access keys so we can do some automated work on the server side. We'll branch the Git repository into a develop, staging, and master system and learn how to dynamically use variables in each branch. We'll also integrate custom Slack notifications and even attempt some self-healing mechanisms for when things go wrong with the deploy. From there, the sky's the limit for what you want to do with your own builds, deploys, and tests. I hope the course looks interesting to you, and I hope you join me in lesson one, where we'll see how quick and easy it is to get started with pipelines. I also want to mention that this course is targeted more at intermediate and higher developers, developers who already have some experience with Git repositories and bash scripting. We'll start off simple and get more complex, but I just wanted to let you know that. I think we're going to have a lot of fun in this course, and I hope to see you there. 2. Environment Setup: Before we get started coding, we should make sure we're all using the same tools. Since we'll be using a bit of Node, let's make sure our Node versions are the same. If I check my Node version, I'm using the latest of version 10, which is 10.19, and npm is also here, 6.13.4. If you don't have this version, you can install nvm and then run nvm install v10.19. In my case I have it installed already, but that command will install both Node and npm for you. Installing nvm itself is a bit outside the scope of this course, but if you go to the nvm GitHub page and scroll down to "Installing and Updating", they have a bash script that will get you the latest version. If you're using Windows, you'll have to install the 10.19 version of Node from source. For this course, I've already prepared a simple Git repository that we can use as an example for building our first pipeline, so we can clone that now; I'll put the link in the lesson notes. We'll cd into that folder and open it up in Visual Studio Code.
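As a quick reference, the environment-setup commands just described might look roughly like this as a sketch; the repository URL itself is only given in the lesson notes, so it is left as a placeholder here:

```bash
# check the versions used in the course (Node 10.19.x ships with npm 6.13.x)
node -v
npm -v

# if needed, install and switch to Node 10.19 via nvm
nvm install 10.19.0
nvm use 10.19.0

# clone the example repository (URL is in the lesson notes) and open it
git clone <repository-url-from-the-lesson-notes>
cd <cloned-folder-name>
code .
```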
It looks like there's a lot of stuff here, but really all it is is a copy of HTML5 Boilerplate, a nice bare-bones template where you can start prototyping and building a wireframe for a website without having to worry about picking a framework like React or Vue. Basically, if you know normalize.css, it has that in there, plus a few other goodies that unify the styles when you visit the page in any browser. When we download HTML5 Boilerplate, we just get a zip file, and when we unpack it, we get the same folders that I have in my project. If we go back over to the project, we see the same type of structure. The only thing I've added, if we go into index.html, is Bootstrap for some styles. From there, the rest is from the boilerplate, the "add your site or application content here" placeholder all the way down to the Google Analytics tag, which also isn't filled out. But this is just some simple content, and we can open it in the browser and see we have a basic site to work with: no JavaScript, no front-end framework, and it's somewhat responsive. I'm sure you can find something that isn't perfectly responsive here, but it's good enough to start with, and it'll suit its purpose for what we're going to do. So in this lesson, the first thing we did was ensure we're all using the same version of Node, version 10.19. Then we downloaded a simple example Git repository, which we'll use throughout the course. As we saw, it's a simple copy of HTML5 Boilerplate; I added some markup to index.html and also two stylesheets, one from Bootstrap and one of my own. In the next lesson, we'll get started and finally build our pipeline. 3. Creating Your First Pipeline: The first thing we'll need to do is create a Bitbucket repository. Although we cloned the initial repository from GitHub, we'll have to create a repository here and then point the origin URL of the clone at the newly created repository in Bitbucket. If you don't have a Bitbucket account, you can create one now; they're free. Once you're on your dashboard overview page, you can go to Create repository. I'm going to use the same name as the one from GitHub, and we create it. So we have our URL; note that it will be different based on your username, and we want to copy just the URL, not the git clone part of the command. We can check the current remote URL of the repository we just cloned with git remote -v, and we see the GitHub URL. We'll update it with the Bitbucket URL that was just created, and you can check that it worked by issuing the same git remote command. Now that our remote is set, we can finally create our pipeline file, and that file has to be named bitbucket-pipelines.yml. As far as I know, a pipeline file always begins with the keyword pipelines. Then you define branches. As we'll see later in the course, it's helpful to have different pipelines based on which branch you're on: for example, you may have a testing or staging environment and then your live or production environment, and the pipeline has to run differently for each of those branches. If some of you dug into the repository, you may have noticed that I already named the initial branch that we cloned lesson-1, and you can verify that by running git branch -v; you can see we're on lesson-1. So in our pipeline file, we reference that branch, lesson-1. Now we can add what is known as a step, give our step a name, and add a script. In pipelines, bash is always available to you, so for now we'll just do a simple echo "Hello World".
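A minimal sketch of what that first bitbucket-pipelines.yml might look like; the branch name follows the transcript, while the step name here is just illustrative:

```yaml
pipelines:
  branches:
    lesson-1:
      - step:
          name: Say hello
          script:
            - echo "Hello World"
```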
We'll save that file, add everything, commit with a message, just "test pipeline", and push it. Great. Over on the Bitbucket repository, if you go to the Pipelines tab and scroll down, because this is the first time we've pushed a bitbucket-pipelines.yml file, we get this sort of view. In our case, we've configured it correctly, so Bitbucket tells us the configuration looks good and we can fire it off here. If you've missed a command or an indentation, it might look something like this: it says it's invalid, and you'll have a yellow dot you can hover over. Here it says bad indentation, but sometimes it can say unknown command or something is unrecognized; really, 99% of the time it's because of indentation. There's also a really nice online validator tool if you're really stumped: you can keep copying and pasting your pipelines file into it until it says it's valid. So if we fix this, just to show as an example, it's valid, and these variants are invalid. You can play around with your pipeline until it's valid there and then copy it back into your file. Alright, let's enable Pipelines, and right away our very first build fires off. Great, it was successful. The really nice thing with Pipelines is that you get a log of each step that occurred. The setup and teardown steps are built in; but here we see our custom command, with both the actual command from the pipeline and the result of that command. We'll see how helpful this is later when debugging errors in the pipeline or when something goes wrong on the server. So in this lesson, we created our first pipeline file with just a simple echo, we initialized it through the Pipelines tab, we saw the logging events, and we also saw how you can validate your pipeline with the Bitbucket Pipelines validator, which I'll post in the lesson notes. Now that our pipeline is hooked up to Bitbucket, we can start building more complex examples, and we'll move into that in the next lesson. 4. Using SCP to Transport Artifacts from the Build: Now we'll look at how to deploy what are called artifacts in our Bitbucket pipeline. Artifacts are really nothing more than specially defined files and folders which Bitbucket knows to keep a copy of and which you can use in later steps. There are really two main steps in this SCP process: the first is to actually produce the artifacts and tell Bitbucket that those are the artifacts, and the second is to copy those artifacts to your target server. In this case, I'll be using a DigitalOcean droplet, which is just a remote Linux box. Artifacts are especially useful in build processes which only produce static content, for example a Create React App or TypeScript build, and they're also useful when the build is particularly computationally heavy or slow. I use this a lot, for example, because the droplet I mentioned is the smallest size, and I find that trying to run build commands directly on the server can result in timeouts or the server simply not having enough memory to do the build. Before we write any commands in the pipeline, we want to create a new branch called lesson-2. If you want to follow along, you can create the branch now, or if you want to skip to the final working code, you can simply switch to that branch in the clone from GitHub, which has the full source. So we'll do that now: git checkout lesson-2. Great. We can check the branches just to double-check that we're on lesson-2.
Again, we're working with this static HTML5 Boilerplate codebase, so it can be served just by serving index.html by itself. To simulate a kind of fake build process, we're going to define a folder called dist. And like with any framework where you'd have a dist or build folder, for example a TypeScript build or Create React App, you want to ignore that folder in Git; we don't want to commit the produced files into our repository. So we'll also create a new .gitignore file and simply add dist. Then we can head back to our pipeline. Now that we're on the lesson-2 branch, we need to define that branch name and define our artifacts step. I'm going to name it something like "somewhat fake build process", just to reiterate that we're only simulating a build by moving some files into the dist folder, which stands in for the actual build output. Because we're just simulating the build, all we have to do is copy the entire codebase into this dist folder. At first you might think to use cp as a simple bash command to do that, but we don't want to copy the dist folder itself into dist, or we'd end up in a recursive loop and it won't work. A nice alternative is rsync, which lets you avoid that recursive loop: rsync with the archive flag and --progress to get a progress bar, with the project root as the source, copied into dist, excluding dist itself. That command should do it. Then we of course need to define the artifacts, which will be dist and all its files and subfolders, so dist with a double star. Typically, in this script step with a framework like Node or TypeScript, you would run your install, build, and test commands; those commands would sit there, and then you would still define your artifacts. You can also define multiple artifacts if you have more than a single directory; for example, if you had some other output .txt file, you would keep listing entries under the artifacts identifier. Then of course there's a second step, which is actually deploying that dist folder to the server, and we'll name it accordingly. The script here is a bit special: Atlassian has a built-in SCP pipe you can use, and you also need to define the variables for this SCP pipe. You need a user and server, the remote path, which I'll put under the standard Linux web root in a folder called skillshare-scp-test, and the local path, which is relative to the repository itself, which is dist, where we want all its files and folders. These two steps should get us going with SCP. The only thing missing is that we actually have to set up SCP so it will work, and there are two parts to that: we first need to define the target user and server we want to deploy to, and we also need to set up SSH keys so Bitbucket has access to our server. Heading over to our repository page, we want to go into Repository settings, and first we'll add those repository variables, the user and server. I should also mention that whenever you have a dollar sign with a variable name, that's telling Bitbucket to look it up in these key-value pairs of repository variables. For now we only need the user and server, but you can theoretically add as many as you need. These are, of course, very helpful for things like this, where security and privacy are important and you don't want to hard-code the values directly into your pipeline.
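Putting this lesson's pieces together, here is a sketch of roughly how the two steps could look, assuming Bitbucket's atlassian/scp-deploy pipe; the pipe version, exact rsync flags, and folder names are illustrative, and $USER and $SERVER are the repository variables just described:

```yaml
pipelines:
  branches:
    lesson-2:
      - step:
          name: Somewhat fake build process
          script:
            # copy the whole project into dist, excluding dist itself to avoid recursion
            - rsync -a --progress . dist --exclude dist
          artifacts:
            - dist/**
      - step:
          name: Deploy dist to the server via SCP
          script:
            - pipe: atlassian/scp-deploy:1.2.1
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: /var/www/skillshare-scp-test
                LOCAL_PATH: dist/*
```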
For my setup, the user is root, and you can mark the variable as secured. For the server value, I'll paste mine in and also make it secured. Okay, we have our server and user variables, and now we need to set up the SSH key. The easiest way is to have Bitbucket generate the keys for you, so we'll click that, and Bitbucket creates a private/public key pair. They even tell you exactly what to do: copy this public key to the authorized_keys file on the remote host. So we'll do that now. I have an alias to log in directly to my droplet, since I don't want to reveal the IP and access details, but you can see root, which was our user. We'll go to that path and open up the authorized_keys file. I already have a few public keys here, some of which are already being used for Bitbucket pipelines in other repositories, but we'll paste ours in now, press Enter, and it looks okay. Exit with Ctrl+X and save to the same file. Okay, great. It's also a good idea to fetch and validate the known hosts fingerprint. I'll add mine; it's the same IP or URL that you used in the repository variable for the server, and we can fetch the fingerprint. Before we add the host, we can quickly hop onto the server and validate that it's correct. To do that, I'll again SSH into my Linux box, and we can write a simple bash for-loop to print the MD5 hash for each of the host's public key files. It looks like this, and I'll post the command in the course notes. So we can compare: yup, that's our ECDSA MD5 hash, so we can go ahead and add this host. One last thing we need to do on the server, wherever you'll be serving the files from, is ensure that the directory exists. So we'll cd into /var/www and create that directory, and we can check that it's there. Perfect. That should be enough work on the remote server for now. Back in our local repository, we should be able to push the code as it is now on the lesson-2 branch, and it should execute the steps. Let's try it out: add our changes, commit with a message, "first test of SCP", and push. We need to tell the remote that origin should also have the lesson-2 branch. If we hop over to our pipelines, we can see that it was successful, and again the nice advantage is that we get the same console log: since we passed the --progress flag to rsync, we see the same output we would get running that command locally. It also successfully deployed the folder. To double-check, let's hop back onto the server; we put the files in /var/www/skillshare-scp-test, so let's list what we have there. Great. So this build process on the lesson-2 branch is working: in the cloud, not locally, we're copying all these files into a dist folder, which is defined as an artifact, and copying the contents of that folder to the remote path. There's still one final step we need to do on the server, and that is to actually serve these files and check that they're being served properly.
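As an aside, the fingerprint-checking for-loop mentioned a moment ago is only referenced as living in the course notes; it isn't reproduced in the transcript. One common way to print the MD5 fingerprints of a server's SSH host keys, as a sketch, is:

```bash
# run on the remote server; prints the MD5 fingerprint of each SSH host public key
for f in /etc/ssh/ssh_host_*_key.pub; do
  ssh-keygen -l -E md5 -f "$f"
done
```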
Once we're serving those files, we'll have a full, albeit simple, continuous integration build. So in the next lesson, we'll look at how we can SSH onto the server from our build pipeline and run the commands we need to ensure index.html is being served. 5. Using SSH to Run Commands on the Server: In the last lesson, we learned how to deploy artifacts to our production server. We now need to run some additional commands to actually serve the artifacts we deployed, so we can view them at a public URL. Again, the first thing we do is branch off into a new lesson branch, lesson-3. We can copy the first two steps exactly from lesson two, and we'll be adding a third step. We want to serve this static content, and ideally you would have a server running constantly, for example an Express server, and then set up a proxy using a tool like Apache or NGINX to expose it. We'll get to those kinds of setups in future lessons, but for now we just want to show how to issue SSH commands to be able to do this. Two tools which make this quite easy are the npm serve-handler and ngrok packages; together they'll serve this static content and also expose it to the internet so we can view our site at a public URL. I've already put together a simple index.js file, which I'll put in the lesson notes since it's a bit outside the scope of this project, so we'll create the file here and I'll just copy and paste it in. Basically, what's happening is we start up a server using the serve-handler middleware. For those interested, serve-handler is the core of serve, which is the simple, recommended way to serve, for example, a Create React App build: just a CLI where you say serve plus a folder name, and that folder is served. This is what's happening in the background; it's middleware for the standard Node http package. Then in step two, I connect ngrok to that port. I kept 5000, since the serve CLI uses 5000 as its default; there's no special reason. And ngrok will expose local port 5000 to the internet. With our index.js set up and ready to run, we can now define the third and final step for this build process. So again we add a step, and I'll give it a fairly explicit name: statically serve with serve-handler and expose it to the internet via ngrok. It will again have a script. Since we already defined the user and server variables in the last lesson, we can of course use them in this step as well for our SSH call, just ssh user at server. Then we can think about the command we need to issue. First we need to get to the root of where our static content is, skillshare-scp-test, and we'll chain commands with &&, which runs them in series: bash ensures one command completes successfully before going on to the next, whereas a single & would run it in the background. The cd is important so we're in the right directory before we issue the next command, npm install, where we install the two tools we need to run our index.js, serve-handler and ngrok. Again, that's something we want to run in series and wait for, and then we can run node index.js. We can save that. If we look at index.js, you can see I put some console logs in, so we'll be able to see them in the Bitbucket UI, and at the end we'll get an ngrok URL we should be able to visit, some sort of hash followed by .ngrok.io.
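The actual index.js lives in the lesson notes and isn't reproduced in the transcript; a minimal sketch of the serve-handler plus ngrok idea it describes, assuming the serve-handler and ngrok npm packages, might look like this:

```javascript
// minimal sketch: serve the current folder on port 5000 and expose it via ngrok
const http = require('http');
const handler = require('serve-handler');
const ngrok = require('ngrok');

const PORT = 5000; // the serve CLI's default port, kept here for no special reason

const server = http.createServer((request, response) => handler(request, response));

server.listen(PORT, async () => {
  console.log(`Running at http://localhost:${PORT}`);
  const url = await ngrok.connect(PORT); // resolves to the public tunnel URL
  console.log(`ngrok tunnel available at ${url}`);
});
```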
We'll see that in the Bitbucket UI, and we'll be able to visit it and see our site, hopefully. We'll commit this with a message saying we're now able to issue commands via SSH, and push. Great, we can see that the ngrok tunnel is exposed here, so we can just click it, and great, we have our site. While this final third step in our continuous deployment process works, there's a slight hiccup: node index.js never gives Bitbucket a signal that it's done, so the build process never realizes it has completed, and that's of course bad if you have a limited number of build minutes in the Bitbucket environment. So we're going to introduce a new tool which can run these processes in the background, and that is forever. We'll simply add it to the list of packages we install on the server, and instead of node, we issue forever start index.js. But editing this command, we notice a different problem: as soon as your application grows and you have to run more complex commands, and perhaps even consider different environments, we can't keep writing ever-longer inline commands in our build pipeline. A nice way to refactor this is to collect the commands into a bash script. So I'll take this, create a bash script called deploy.sh, paste it in, and put the commands on multiple lines for readability. Saving that, instead of that large chain of commands, we can now simply run our bash script. But we need to remember that this bash script will be moved into the dist folder, which becomes the root static folder, so we first need to cd into the skillshare-scp-test folder and then call bash deploy.sh. We'll commit that as "refactoring SSH commands into deploy script" and push. Great. We do see some output from forever, but it's important to notice that we lose the ngrok info, and it would also be nice if we didn't always have to come to the Bitbucket UI to see the logs from our build. So we should look for a nice way of messaging, and we'll do exactly that in the next lesson, where we'll hook up a Slack bot that we can message from our build process. 6. Creating a Slack Messaging Bot to Add Messaging Functionality: In the previous lesson, we built our first complete continuous integration build, and towards the end of the lesson we collected some bash commands into a script, which let the step exit cleanly for Bitbucket. But we saw that we lost some logging information, and in the long run, always digging through the logs for critical information such as the published URL is a bit cumbersome. So we want to build a messaging system to access that information in an easier, more user-friendly way, and we're going to do that by creating a Slack bot. I've already opened the URL api.slack.com/messaging/webhooks, and the very first step is to create your Slack app. I'll click on that; I've already logged in, since I have a Slack account. I'll call this "skillshare deploy bot" and create the app. Then we want to go into Incoming Webhooks and activate them. The very first example here is already in bash format. Let's add a new webhook so we can get the URL. I'm going to quickly create a new channel called "skillshare test", we'll add the webhook to that channel, click Allow, and great, we get an actual webhook URL. As I said, the example is already in bash format.
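The generated snippet looks roughly like Slack's standard incoming-webhook example below; the webhook URL here is a placeholder for the one Slack gives you:

```bash
curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Hello, World!"}' \
  https://hooks.slack.com/services/T00000000/B00000000/XXXXXXXXXXXXXXXXXXXXXXXX
```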
So as a first example, we can go right into our deploy script and put this curl command in first, changing the text to "Starting CI...", making sure to add && to continue to the next command. We'll also put one at the end with "CI complete", where we don't need a trailing &&, and save that. I've already gone ahead and created a lesson-4 branch, so we need to add it to the pipeline file; the build itself remains exactly the same, we've just hooked into this webhook as a side effect, and we need to reference lesson-4 as the branch. Now that we've pushed, that should fire off the build, and we should see our messages in Slack. And we do: "Starting CI" and "CI complete". So we've tied into our deploy script, but our original goal was also to get the ngrok link from the index.js script. Again, I'm trying to keep this course as focused on pipelines as possible, so I'll just paste in the code and quickly describe what I did. Essentially, I've written a Node.js version of this curl command: we need to set the application/json header and post JSON data with a "text" key and a string value to the given URL. So I've required the node-fetch package and created a wrapper function around it, where all you have to do is pass the string message: we POST with the application/json header, and for the body we stringify an object with the key "text" and the message. To illustrate, I've set up the function calls alongside the console.logs. In reality, we could probably remove these console.log lines or write to a local log file on the server, but for now I'll just leave them. So as a result we should still see those two messages, and in addition the "running at localhost:5000" and ngrok tunnel messages, and we'll finally get our URL in Slack. Because I've added node-fetch, we have to add it to our deploy script as a package, and that should do it. And we can see that what were previously console logs are now Slack messages. Even though we pass this just as a normal string to Slack, Slack is smart enough to recognize URLs and format them for us. You can even do other things, like pass emojis and do simple formatting like bold and italic, but I'll leave that to you to look up in the docs. Let's check our link to see if it's working. Perfect, the site is up. For those of you who have been coding along so far, you may have noticed there's one small issue with our deploy script and our index.js. The problem is that this script starts a node process listening on 5000, and when we issue forever start index.js, forever will try to start a new instance of node on this script, which will crash because there's already a process on 5000. In this case, forever has a nice command to use instead of start: restart. The nice thing is that it checks whether a process for index.js in this folder is already running; if not, it starts it, and if it is running, it restarts it. So running forever restart will fix that issue. For one final cleanup here, we should also move this webhook URL to an environment variable. It's not a huge security risk, but if someone gets hold of the link they can spam your Slack channel, which would be a bit annoying. In the case of the Node code, we can remove the hard-coded URL, and it becomes something like process.env.SLACK_WEBHOOK_URL, and we should do the equivalent in the deploy script.
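A sketch of what that wrapper could look like after the refactor just described, assuming node-fetch and the environment variable name used in the transcript:

```javascript
const fetch = require('node-fetch');

// posts a plain-text message to the Slack incoming webhook;
// the URL is read from an environment variable rather than hard-coded
async function sendSlackMessage(message) {
  await fetch(process.env.SLACK_WEBHOOK_URL, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: message }),
  });
}
```

It can then be called alongside (or instead of) the existing console.log lines, for example sendSlackMessage(`ngrok tunnel available at ${url}`).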
The two hard-coded URLs in the deploy script I'll likewise replace with SLACK_WEBHOOK_URL, and of course this dollar-sign syntax brings us back to the repository variables, so we can add SLACK_WEBHOOK_URL there, paste the value in, and also make it secured. There's one final step: the repository variable isn't directly usable inside the deploy script on the server; rather, it's available in the bitbucket-pipelines.yml file. So we actually pass it as an argument, and when it's passed into the script, per the bash convention, the first parameter is $1. To use the same name in this script as in process.env in Node, we can export a variable of the same name set to that parameter. And because forever keeps its own environment, we also need to put the same variable assignment in front of the forever call. This syntax also won't work with restart; it has to be with start. So we'll do a pair: first forever stop index.js, and then start it again with the correct parameter. There's one last thing to do: since we're passing the repository variable inside this SSH command string, in order for it to be expanded to the actual value, we need to use double quotes. And that should do it. We can see that the build completed, we've successfully refactored it, and it works exactly the same. Let's check the newest message. Great, that's our static site. And to be sure there's no other separate process still running, which we wouldn't expect, the old URL should now lead to some sort of error. Exactly: ngrok isn't running anymore at that URL, and we've successfully created fresh access to our site at the newest URL. So everything seems to be working. In this lesson, we refactored our hard-coded Slack webhook URL into a repository variable. We saw that in order for it to be accessible while running on the server, we have to pass it using double quotes in the SSH string, and we also have to set this key-value pair in the repository variables UI on the Bitbucket site. Even after passing in this variable as $1, we also had to pass it to forever, and with that syntax the parameter can only be set with forever start, so we do a pair of forever stop and forever start. You can see that this kind of layout is getting a little complex. When we work with a more complex framework like TypeScript, we can take advantage of various env files, such as a development, staging, and production environment, so that instead of these command pairs, everything flows cleanly from an environment JSON file.
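Pulling lesson 6 together, here is a rough sketch of how the deploy script and the SSH call could fit, with folder and variable names following the transcript; treat it as illustrative rather than the exact course code:

```bash
#!/bin/bash
# deploy.sh -- invoked from the pipeline roughly as:
#   ssh $USER@$SERVER "cd /var/www/skillshare-scp-test && bash deploy.sh $SLACK_WEBHOOK_URL"
# (double quotes so the repository variable is expanded before the command is sent)

# the webhook URL arrives as the first argument; re-export it under the same name
export SLACK_WEBHOOK_URL="$1"

curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"Starting CI..."}' "$SLACK_WEBHOOK_URL"

npm install serve-handler ngrok forever node-fetch

# forever keeps its own environment, so the variable must be set on `start`;
# `|| true` keeps the very first deploy from failing when nothing is running yet
forever stop index.js || true
SLACK_WEBHOOK_URL="$SLACK_WEBHOOK_URL" forever start index.js

curl -X POST -H 'Content-type: application/json' \
  --data '{"text":"CI complete"}' "$SLACK_WEBHOOK_URL"
```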
7. Bootstrapping with Create React App: In the last lesson, we polished off sending messages to Slack at certain points in the pipeline, but we saw how cumbersome it is to deal with even one repository variable, and how fiddly it gets when we're building our own tooling. And we still haven't moved away from that original HTML5 Boilerplate code. The pipeline techniques we've used so far are fine, and the concepts work with any project, but we should examine what we can do with a more complex example using React and TypeScript. We can leverage the tooling those languages have, and the tools built around them, to make our pipelines even cleaner, and we'll finally be designing a pipeline with a real build in this lesson, in this case the npm run build command from Create React App. First, for this lesson I'll be creating three branches to simulate a full CI/CD workflow: develop, staging, and master. Let's first create a completely empty branch, since we won't be taking any code from our HTML5 site, so we'll create an orphan branch. We want to start with a clean slate here, so we're going to remove all of these files. There's one left over, but that's ignored anyway, so I can get rid of it as well. Okay, we're ready to bootstrap with Create React App: that's npx create-react-app in this location. Great, the Create React App bootstrap finished, and we can push all of this as the initial commit for this new project, so to speak. We also want to make sure our staging and master branches have the same codebase to start with. Okay, so develop, staging, and master all have the same codebase. Keep in mind this setup is a bit special, since I want to keep all the course code for you in the same repository; in a normal project, master is created by default with git init, so you would only have to create the develop and staging branches. 8. Defining Environment Files and Using Them in a Component: Let's hop into some code here. In the last lesson, we were concerned with juggling and passing repository variables around. To handle that in a cleaner way, we can define environment files for each of the branches we have. First I'll create an env folder in the src folder, and I'll put in four environment files: a standard env.json, plus develop.json, staging.json, and master.json. As a simple starting example, we'll put a siteName key in each JSON. For develop, say something like "development local site"; think of it as a title, just as an example to start with. Staging would be similar, and we'll call it the testing site, and master would be the production site. For now, we'll take the develop JSON as our env.json. So we have these various environments, but it's the normal env.json that we'll use throughout our components in the rest of the app. For example, if we hop into the App.tsx file, I can import that JSON file and just call it env, and the benefit with Visual Studio Code and TypeScript is that TypeScript knows what's on the env object, so it already knows siteName is a property there. To use our develop environment locally, we can add it to the prestart command in package.json. This is an npm naming convention which means: before firing the start command, run whatever is in the prestart command. For that we simply want to, in bash, copy develop.json over the main env.json. Now if we fire up npm start, prestart will fire and copy develop.json into env.json, which we'd already done manually, and we should see the title "development local site". Let's give it a shot. Great, my title is "development local site". 9. Utilizing the Environments in a Bitbucket Pipeline File: Okay, but we're not here to write package.json scripts; we're here to learn about Bitbucket Pipelines. So let's incorporate these environments into our pipeline. Since we're on a totally new slate, we need to create a new bitbucket-pipelines.yml file, and it starts just like the one in the original example. Here you would normally have your staging and master branches, and then your commands within each, but because we're using this special lesson-based setup, we have these lesson branch names.
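As a concrete recap of the lesson 8 setup before wiring it into the pipeline: each file in src/env (develop.json, staging.json, master.json, and the working env.json) holds a small object such as {"siteName": "development local site"}, and the prestart hook copies the develop file into place for local runs. A sketch of the relevant package.json fragment, assuming standard Create React App scripts:

```json
{
  "scripts": {
    "prestart": "cp src/env/develop.json src/env/env.json",
    "start": "react-scripts start",
    "build": "react-scripts build",
    "test": "react-scripts test"
  }
}
```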
Just like we added the prestart command for local development, we want to do the exact same thing for the staging and master environments respectively. So we'll add a step, "copy staging environment to env.json", with the same kind of script, just a different file: copy staging.json to src/env/env.json. And the same for master; I'll just copy, paste, and take the master file. Since the variables flow from this env file, it's important that this is the first step, or one of the first steps if you're doing other things: before we do any build, we want our variables set correctly, before building or testing or anything like that. Now let's actually define the build and artifact steps. It's essentially the same concept as with our simple static site, but a bit more complex, because we also want to ship our node_modules as well as the build folder that Create React App creates. I'm going to paste in the steps we defined before and refactor them for this Create React App project. Instead of a fake build process, we're going to call it a real build process, and that script is simply npm run build. And we have a few artifacts here: our build folder, which is Create React App's default output, the node_modules folder, and also, which we haven't created yet, an index.js server. 10. Refactoring the ngrok server for the React TypeScript Project: This index.js server I'm actually going to borrow from the HTML5 static site example as well, just because I think ngrok makes it very easy to get your site onto a public URL quickly, and it's great for prototyping. I'm going to create an index.js here as our server file and copy and paste the example in. We can already leverage the environments we've created: instead of defining a fixed port, that's something we can add to our environment JSON, so I'm going to replace it everywhere with env.port, and of course we have to import env. We should then add the port key: we can start at 5001 for develop and just go up from there, staging at 5002 and master at 5003. We can also include the port as a hint in these console logs. There's one other modification: before, we were using serve-handler directly in the working directory, but with how we're going to define our pipeline, this server file will sit alongside the build directory that gets created. So there's a third parameter here, an options object, and the option we want is the public option, set to build. There's one more thing we can add, again leveraging these environment JSON files, and it's this: we now have the advantage that if we wanted separate Slack URLs for each of our branches, for example a bot for staging and a bot for master, which you could put in separate channels, we could do that with this environment setup. So instead of passing the actual value, we pass the key of the environment variable to look up on process.env, and we name this property with "key" in it to signify that it's the name of the key. For develop, perhaps you don't even have one, so we can leave that empty; for master you might call it something like a skillshare master webhook key, and for staging, a staging Slack webhook key. In this way, you could have multiple webhook URLs, or keep the same key name and just use the one we've already created. I'm going to save all of those.
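The refactored server isn't shown verbatim in the transcript; here is a sketch of the idea, assuming env.json is shipped under src/env/ next to this file on the server and that it contains a port and a slackWebhookUrlKey property naming the environment variable to read:

```javascript
const http = require('http');
const handler = require('serve-handler');
const ngrok = require('ngrok');
const fetch = require('node-fetch');
const env = require('./src/env/env.json'); // { siteName, port, slackWebhookUrlKey }

async function sendSlackMessage(message) {
  const webhookUrl = process.env[env.slackWebhookUrlKey];
  if (!webhookUrl) return; // e.g. develop may have no webhook configured
  await fetch(webhookUrl, {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ text: message }),
  });
}

// serve the Create React App output that sits alongside this file
const server = http.createServer((request, response) =>
  handler(request, response, { public: 'build' })
);

server.listen(env.port, async () => {
  console.log(`Running at http://localhost:${env.port}`);
  await sendSlackMessage(`Running at http://localhost:${env.port}`);
  const url = await ngrok.connect(env.port);
  console.log(`ngrok tunnel available at ${url}`);
  await sendSlackMessage(`ngrok tunnel available at ${url}`);
});
```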
11. Refactoring the Pipeline for the React TypeScript Project: Let's finally get back into the pipeline. We've defined our artifacts and refactored index.js to work with the environment variables we have, and now we have to deploy each of these artifacts with SCP. It's important that we use a different folder here, so for the TypeScript project I'll call it skillshare-typescript-staging. The build we want to put in the build folder, node_modules likewise goes into a folder of the same name, and index.js will just go in the root of this folder on the server. That should do it. We can copy these four steps and use them for master as well; the only change is a different folder for master. I first want to hop onto my server and make sure those folders exist, so I'll make the staging folder and the master folder, and within each I also need to make the build directory and the node_modules directory. That should do it, and I can log out of the server. The final piece here is to add a Node image. Before, we got away with just bash scripting, because we only had to do some folder manipulation with static files, but here we're actually running npm run build in the cloud, so Bitbucket needs to know which Node version to use; otherwise it uses its default. As I said at the beginning of the course, we're using Node 10.19, and that goes in the image directive of the pipelines YAML file. We can also add a caches directive under the build step and specify node, which means Bitbucket will reuse the cached dependencies from a previous run instead of downloading everything fresh every time; this speeds up your build process and saves you build minutes. I'll add that to master as well. One step I nearly forgot in the build process is to actually install the npm modules with npm install, so add that to both the staging and master branch builds. I also realized I'd written the env port key wrong; it should be env.port, so let me fix that quickly. And I changed the webhook environment key for develop to the staging one, just so we can run this locally to illustrate what it will do on the server. We also can't forget to install the dependencies we need to run our server, so we'll do that now and save: we need serve-handler (http is built in, so we don't need that) and node-fetch. These get saved in package.json, so Bitbucket will see the same package.json and install them just the same. Just as a test, we can run the server right now. Great, we get the console logs, and we should see the same messages in Slack. Yup, and this ngrok URL should be the same. Exactly. Because our server requires this environment file, we also need to transfer it, so we can basically copy this: just another SCP, and we can define it as an artifact if we want to maintain the same structure, and do the same again for master, where the folder is different. Of course, when we push to this lesson's develop branch, nothing will happen with our build, because we've only defined a pipeline for the staging and master branches.
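A condensed sketch of the staging side of the refactored pipeline described in this lesson; the pipe version, branch name (the course repository uses lesson-prefixed branch names), and remote folder are illustrative, and the master branch mirrors this with its own env file and folder:

```yaml
image: node:10.19

pipelines:
  branches:
    staging:
      - step:
          name: Copy staging environment to env.json
          script:
            - cp src/env/staging.json src/env/env.json
          artifacts:
            - src/env/env.json   # carried into the following steps
      - step:
          name: Real build process
          caches:
            - node               # reuse downloaded dependencies between runs
          script:
            - npm install
            - npm run build
          artifacts:
            - build/**
            - node_modules/**
            - index.js
            - src/env/env.json
      - step:
          name: Deploy artifacts to the staging folder
          script:
            - pipe: atlassian/scp-deploy:1.2.1
              variables:
                USER: $USER
                SERVER: $SERVER
                REMOTE_PATH: /var/www/skillshare-typescript-staging/build
                LOCAL_PATH: build/*
            # node_modules, index.js, and src/env/env.json are deployed with
            # further scp-deploy pipe calls in the same way
```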
So let's merge now and see what happens: we'll switch to staging, merge in develop, and push. Our build process has worked so far. The install and build step takes about three minutes; this will get a bit faster, though, since, as you can see, Bitbucket is assembling a new cache for node as we specified. But here the SCP step for node_modules has been running for a few minutes, and that's too long. So in the next lesson, we'll learn how we can speed up copying all these files in the SCP process using a tarball. 12. Utilizing Tarballs to Transfer the node modules Folder Faster: What we can do to improve the SCP speed is create a tarball before transferring. I'm going to stop this pipeline and head back over to our pipeline file, making sure we're on the correct branch. What we'll do here is, after the build command, create a tarball of node_modules with the same name. In the artifacts we now don't need node_modules, but rather the tarball that's created, and we'll also rename the SCP step: instead of a folder, we just need to hit that directory with this one file. We'll do the same for the master branch. Then, to be complete, we have to add an additional step to unpack that tarball on the server side, where we again use an SSH command to run commands on the server. We'll copy that to master too, just changing the folder name. This should speed up the build process greatly, and we'll also be able to see how much the caching saves. With this new tarball build process, first of all, it finishes, which is the most important thing. But the node_modules tarball also deploys much, much faster: before, it took an indeterminate amount of time, I waited about four or five minutes and then stopped it, but now it's only about ten seconds. In addition, we see that Bitbucket's caching reduced the initial install and image download portion from about three minutes to only a minute, so that's also three times faster. It should basically stay around that time; it might be a bit longer if you add a few packages, and it would take the full time again if you changed or upgraded your Node image, since Bitbucket would have to pull the new image. Otherwise, this is looking good. Let's hop onto the server to make sure everything looks okay. Great, we have everything here. We could add an additional step to remove the tarball after it's been unpacked, but I'll leave that to you to add in.
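A sketch of the tarball changes just described, again with illustrative paths; the build step gains a tar command, the artifact list swaps node_modules/** for the archive, and a new step unpacks it over SSH:

```yaml
      - step:
          name: Real build process
          caches:
            - node
          script:
            - npm install
            - npm run build
            # bundle node_modules into a single file for a much faster transfer
            - tar -czf node_modules.tar.gz node_modules
          artifacts:
            - build/**
            - node_modules.tar.gz   # replaces the node_modules/** artifact
            - index.js
            - src/env/env.json
      # ...scp-deploy steps as before, copying node_modules.tar.gz to the folder root...
      - step:
          name: Unpack node_modules on the server
          script:
            - ssh $USER@$SERVER "cd /var/www/skillshare-typescript-staging && tar -xzf node_modules.tar.gz"
```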
13. Setting forever to 'watch' Mode on Server: Just like in the static site example, we've created a successful continuous deployment, but we're still not actually serving anything. To make sure we've deployed all the files we need, a good first check is to simply run node index.js here on the server. We expect to get those two console logs and also see them in Slack with the staging variables, since we're in the staging folder, which is where our staging pipeline deploys to. And we do get those console logs, as well as the identical Slack messages, and we can even check the page to ensure the variables are correct. Yep, it says hello, my title is the staging testing site, so it looks great. Of course, we can't come on here manually and issue node index.js each time, and we saw from the static site example that, even though it was a good demonstration of how to use an SSH command, it wasn't ideal to always start the server that way. The way to do that here is to run forever in watch mode on index.js. But there's a problem: forever would be watching too many files, because we have node_modules in here, and really all we want to watch is the source and build. So there's an extra step we need to do, and that is to create a .foreverignore file, which works just like a .gitignore file, and here we want to ignore node_modules. Of course, we need to add this to our pipeline as well; in the build step we just need to define it as an artifact, and then we can SCP it. Great, and the same for master, again making sure it goes to the right folder and adding it as an artifact. Alright, deploying our .foreverignore file worked correctly, and now we can start up forever without any problems, since we've ignored the large node_modules folder. The command is forever -w start index.js. That starts up in the background, but we still get our messages; this is our live staging site. Just to simulate a change, we can simply touch a new file, say newfile.txt, and as soon as I do that, forever notices the change and the site restarts. That's great: whenever we SCP files to the server, forever will restart and reflect those changes in the site. 14. The Big Advantage of Pipelines: Testing in Staging, Merging to Master!: Now it's finally time for all our hard work on this build process to pay off, and that is the scenario where you want to merge to master. You can think of all the work and changes we've done here on staging, half a dozen or so commits, as testing: we've been making sure everything works, and now we're quite satisfied with how the site is working. Because we've been following along in both the staging and master branches with the exact same steps, apart from our custom folders and environments, we expect the build process to work exactly the same. So all we need to do to migrate approved and complete changes from staging is to merge staging to master and push it to our remote, and then Bitbucket will do the rest for us. Let's give it a shot. Here we are now on the server, not in the staging folder but in the master folder, and it looks like our build script worked exactly as it should have. We can even cat our env.json, and we see exactly the master variables. As an initial test, just like we did with staging, we can run node index.js. Great: Slack messaging is working, the console logs show the correct port, and ngrok is up. Now I can run the same forever command as we did in staging. Great. And to get that live preview, we have the master site here, and the most recent staging build here; this is the staging site. So we have two side-by-side web apps, both exposed with ngrok: staging is running at port 5002 on our server and the master branch is running at 5003. 15. BONUS: NGINX Proxy Configuration and React PUBLIC URL Environment Variable: In the last lesson, we built continuous integration and deployment for two branches alongside each other, a staging branch and a master branch. For that task, we were still using ngrok as a simple tool to quickly proxy the site and get it to a public URL on the internet, but as I mentioned a few lessons back, I'd show how I do that with a tool like NGINX. I already have my NGINX configuration file open here for chrisfrew.in, which is my blog. The root is my blog, but if you hit these subdomains, what you get is actually a totally different Node application running in the background.
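As a rough illustration of the kind of reverse-proxy configuration being described here and extended in the rest of this lesson, a server block with a placeholder domain that proxies the two course paths to the local ports used later (5002 for staging, 5003 for master) might look like this:

```nginx
# illustrative only; the real configuration also handles the blog root, TLS, etc.
server {
    listen 80;
    server_name example.com;

    location /skillshare-typescript-staging/ {
        proxy_pass http://localhost:5002/;
    }

    location /skillshare-typescript-master/ {
        proxy_pass http://localhost:5003/;
    }
}
```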
I also have my portfolio, which is yet another application. We can do the same for the two applications we've just created, and I'm going to use basically the same path names we've been using. So we have the staging path, and we proxy_pass to port 5002, and we also have master, which proxies to port 5003. We just fix a small typo here, and we can restart NGINX. There's one small thing I forgot to configure when it comes to Create React App. You can see that when we run npm run build, it gives you a kind of warning: the project was built assuming it is hosted at the root, and you can control this with the homepage field in your package.json. Of course, with our new NGINX configuration, we're at the skillshare-typescript-staging and skillshare-typescript-master paths, not at the root. We also don't want to set a static homepage in package.json, because we're using this two-branch system. In the Create React App GitHub repository, I was able to find a way to set a PUBLIC_URL environment variable instead, which is perfect for our use case. So we can add that just before the run build command: we set the environment variable, and then our setup should work nicely with our NGINX proxy. We're just using the standard bash syntax, export PUBLIC_URL, set to https plus chrisfrew.in plus /skillshare-typescript-staging for the staging branch, and the same with master for the master branch. That of course means we should remove the ngrok code, so we can actually just remove that function altogether, and we know the URL will be available at chrisfrew.in under the skillshare-typescript-staging and -master paths. We'll push this code through the typical chain, starting in develop, then to staging, then to master; we'll still get our Slack message, and it should be fine. So we'll push and see how it goes. If we dig into the latest build and go to the build step, we can see in the run build command that Create React App understands what we're doing: it says the build assumes it's hosted at the URL we've provided. In terms of the server configuration, it works out perfectly, and we can see that the environment variable for the site title is also filled in dynamically, and we expect the same for the staging site. Yep, perfect. So that's how you serve two Create React Apps from separate branches via NGINX proxies. 16. BONUS: Using Docker as an Alternative to a Remote Linux Server: I wrote in the course description that if you don't have any sort of remote or cloud-based Linux server, you can use a local Docker Linux instance as a substitute, so this lesson describes and shows how to set that up. After installing Docker on your system, you can use this Dockerfile to create an Ubuntu instance with all the requirements you'll need; I'll post it in the lesson notes. It includes things like curl and git, and it installs the Node version we need for this course. Once you have this file, you can build it, and you'll note I'm already in the folder where the Dockerfile is, so we can build it with this command. Now we can run that image with docker run in interactive mode, and we're going to bind it to a port. I've chosen 7777 arbitrarily, because it's easy to remember and I don't think it should conflict with any other ports, and we need to bind that to 22 on the Docker side, which is the default port for SSH. We reference our copied image ID, and we are in.
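The actual Dockerfile is provided in the lesson notes and isn't reproduced in the transcript; a rough sketch of the idea, an Ubuntu image with curl, git, an SSH server, and Node 10 that can be SSH'd into as root, might look something like this (the base image, versions, and tag are assumptions):

```dockerfile
FROM ubuntu:18.04

# basic tooling plus an SSH server and Node 10 for the course
RUN apt-get update && \
    apt-get install -y curl git openssh-server && \
    curl -sL https://deb.nodesource.com/setup_10.x | bash - && \
    apt-get install -y nodejs && \
    mkdir -p /var/run/sshd

# allow root login over SSH for this local test image only
RUN sed -i 's/#PermitRootLogin prohibit-password/PermitRootLogin yes/' /etc/ssh/sshd_config

EXPOSE 22
CMD ["/bin/bash"]
```

It could then be built and started with something like docker build -t pipelines-target . and docker run -it -p 7777:22 pipelines-target, matching the interactive run and port binding just described.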
Now that we're on our Ubuntu instance, we should ensure that the SSH service is up, so we'll issue that command, and it looks okay. We should also set a password for this user, and that's done with the passwd command. If you're planning to use this instance for the whole course and you'll be accessing it from the public internet, for example from Bitbucket or another outside source, you should set a very strong password here; since I'm only using it for an example right now, I'm going to set a fairly simple password, confirm it, and the password is updated successfully. It's important to keep this terminal instance running; this is our Ubuntu image running in Docker. So we'll open a separate terminal and check whether we can log in locally over SSH: that's the root user at our own localhost, 127.0.0.1, on the arbitrary port we specified, 7777, providing the password we set with the passwd command. And it looks like we're in. Now, to have an external source like Bitbucket Pipelines access this image, we'll have to open up port 7777 to accept SSH connections from the outside world. To do this, we first need to know the IP address of this local machine, in my case my laptop, on my own Wi-Fi network. We can exit out of SSH here, and to get the address you can issue ifconfig and grep for inet 192, which should find it. If nothing comes up, you may have to issue the full ifconfig and look through the whole listing. We see that this laptop's local address on my network is this one, so we can copy it, and we're going to create a port forwarding rule on our router. Typically, you can access the GUI of your Wi-Fi router by putting 192.168.1.1 into any browser, and you may need to look in your router's documentation for the credentials to log in. Once you're in, almost all routers have the same general layout, and what you want to find is either an Advanced tab or a Forwarding section directly, depending on the manufacturer; typically you go to an Advanced tab and then find Forwarding, where you can set up these rules. We're going to create an IPv4 forwarding rule, so ignore the other rule types and just go down here. We know our local IP address, since we just copied it, and we want the local start and end ports to be the port we specified. For the external IP, on my router you can leave it as all zeros; on some routers you need to leave it empty; again, you have to read your router's documentation for the exact rule. Just to make it easy to remember for our own access, we'll set the external port to all sevens as well. You can also provide a description, I'll just say "SSH to Docker", enable it, and click Apply. Great, we see our entry here, so that looks good to go. Now, to access this from an external site or from the public internet, you need your public IP address, not the local one, but the one your internet service provider sees, or that your router shows to the internet. The easiest way to get it is to hop on Google and search "what is my IP address". Just as a simple example, let's say your public IP address is 152.152.152.152. The way to test whether your port forwarding worked is nearly the same command we issued for the local test: ssh as root, but at that public IP address, still on port 7777. That should work.
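The two connection tests just described, as concrete commands; the public IP is the placeholder example from the text:

```bash
# local test against the Docker port mapping
ssh root@127.0.0.1 -p 7777

# external test once the router forwards port 7777 to this machine
ssh root@152.152.152.152 -p 7777
```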
What's actually happening in the background is that you're hitting your router at that public address. The router looks up the rule and says, ah, this request is on 7777, and forwards it to your local machine, in my case this laptop, and Docker then forwards 7777 to port 22 as we set up with our image, so you're able to access the image through SSH. If this command works and you can access your image from a public, external place on the internet, you're all set and you can use the Docker image for this course. It's important that whenever you're running anything, or expecting Bitbucket, for example, to log in, this container is always running. When you're done with the image, you can simply end its execution with the exit command; logging out from the same terminal where you started it will automatically end its execution, and we can double-check that by issuing docker ps, where we see that no containers are running. Keep in mind that if you're going to use this Docker method, you'll also have to update the server value in your repository variables: the user can remain root, as we set it in the course, but the server will have to be the public IP you looked up, for example, on Google. Then your port forwarding rules should do the rest of the work for you, and you should be able to access your local Linux instance in Docker just as you would any other kind of remote Linux box. You'll also have to redo the SSH key steps from lesson four, with the public and private key, so that your local Ubuntu instance trusts SSH access from Bitbucket. So in this bonus lesson, we learned how to quickly set up a locally running Ubuntu instance using Docker, and we opened up a port of our choice, in this example 7777, to the internet by creating a port forwarding rule on our router, forwarding the external port 7777 to port 7777 on our local machine.