Transcripts
1. Introduction: Hi, my name is Mark
and I want to thank you for choosing this
training where we will go through the full step-by-step configuration process of a Postgres database and the pgBackRest backup and restore solution. I've been working as a DevOps and Cloud platform engineer for many years, and creating, upgrading, and migrating databases makes up a significant part of my job. I decided to create this training where we can configure an infrastructure that you can then replicate in your production environment. PostgreSQL is one of the most popular databases, but setting it up can sometimes be a little bit challenging due to the many moving parts and specific tweaks that are sometimes required for PostgreSQL to run exactly as we desire. This class might look like a very advanced one, aimed maybe at database professionals, but the fact is you should be able to follow it even if you don't have any previous IT experience at all. We will install Postgres and pgBackRest here. We will configure them both. We will explore the most important options so you can have that entire solution up and running. While it might be beneficial to understand some Linux and to have some knowledge of the most popular Linux commands, it is not a must-have prerequisite, and you should be able to complete this class even without that knowledge. All you really need is a laptop or PC and around an hour of free time. If you are interested in DevOps and Cloud technologies, then please remember that you can join our community on the automationavenue.com platform, where you can learn all about Terraform, AWS, Cloud, Python, and many more Cloud and DevOps-related topics. I hope that helps, and thank you for watching, Mark.
2. What we are going to build: Let's have a look at what exactly we are going to build, what our infrastructure will look like. Running Postgres on a single instance is definitely not something you want to have in production. You know the saying: two is one and one is none. We will create a primary server first that will act as our active server and will accept all incoming requests from our clients. Once we have it up and running, we will add a standby server as well. That standby server will be kept up to date by reading a stream of write-ahead log records, or WAL files for short. That means if the primary server goes down, or if anything else happens to it, we can just quickly promote that standby server to become our new primary server. That's what we will set up first, then. Once we have that, we will add pgBackRest, which is a backup and restore solution for Postgres, and pgBackRest will keep sending our WAL files and main backups to a remote location. It can be any remote location of our choice. What that means is that even if both servers, both primary and secondary, go down, we will still be able to rebuild them and restore all the data from that remote location. And today we will build all of that. We will configure everything, and we will then perform the failover and various other disaster recovery scenarios. Just to note, in this video I will install Postgres 16 on Ubuntu 24.04 LTS servers, but that shouldn't matter, because the process is similar for most Linux distributions, and the installation and configuration process for Postgres will also be very similar for all Postgres versions released in the last few years. So you can follow this tutorial even if you have, let's say, Fedora and want to install Postgres 14 or Postgres 17. I'm going to use an AWS S3 bucket as the remote location storage, but pgBackRest allows you to store WALs and backups nearly anywhere you want. So it can be another server, it can be some NAS storage, or it can be another cloud location like GCP or Azure; it can probably be something I'm not even aware of, but pgBackRest is very flexible and it's pretty easy to configure as well. That's why it's very common to see it in production environments. Let's get started then.
3. Server preparation: My servers are going to be
virtual servers in AWS, and they are called
EC2s there. But again, it does not matter where your servers are located, if they are in the Cloud
or if they are on prem, as long as you can
configure them so they are able to talk to
each other, you are fine. So I will just launch instances and quickly create the servers. Let's call them PG, and as I said, it's going to be Ubuntu 24.04. For the instance type, let's choose something from T3, maybe t3.large. This is the size of our server. Key pair: I will add my SSH key. And in the network settings, I actually want to change one thing here. First of all, you might see auto-assign public IP. In production, you definitely don't want to have a public IP on your database. But here, because I don't have a NAT gateway, that's what it's called, a NAT gateway, my instances otherwise won't be able to connect to the Internet, and I don't want to create those NAT gateways. So let's just remember that these servers shouldn't really have public IPs and I should disable it. For these purposes, it's just for testing, so it's okay for me to leave them as they are, because we simply need to install some stuff. Let's go further. Security group: I can call it postgres, and basically what I need is this: SSH is already allowed, but I want to add one rule. We need port 5432 to be allowed as well, so these servers are able to communicate on port 5432, basically. And as the source, I want to choose my entire 172.31 subnet. It will be our Postgres connectivity, because your servers need to be able to talk on that port. Regarding storage, we can bump it up a little bit; that doesn't really matter. What does matter in AWS, though, is that I have to enable encryption. That's it. And then I need two instances, actually, not one. We will just launch the instances. That's it. Two
instances created. Let's view them. We can
see they are pending. You can ignore that previous one; I realized I already had another instance running. Let's maybe change their names: this will be our PG primary, and this will be PG secondary. We should be able to connect to them shortly. They are still initializing, but maybe we can already try. I will make the terminal a bit wider and open tmux. Now Ctrl+B and then quickly Shift+5, and we can divide the tmux window vertically, and we will log on to both of our servers. I think I have to log on as the ubuntu user. Yes. Let's change the hostnames as well, just to make it perfectly clear where we are. So: sudo hostnamectl set-hostname pg-primary, and the same for the secondary. We need to reboot them after that. But now, when we log back on, that name should be reflected.
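Just as a rough sketch of that step, assuming the hostnames are typed as pg-primary and pg-secondary, the rename on each server looks like this:

sudo hostnamectl set-hostname pg-primary     # run on the first server
sudo hostnamectl set-hostname pg-secondary   # run on the second server
sudo reboot                                  # on both, so the new name shows up in the prompt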
4. Postgres installation and initial configuration: All right. Now what we can do: we can press Ctrl+B, then colon, and type setw synchronize-panes. This way, we can type the same commands in both panes at the same time. Let's start with sudo apt update. It's always the first command you should run anyway. Let's clear again. Let me make it full screen. I believe we now know what we are talking about.
we are talking about. Second command, not
entirely sure if I need it, but I usually install GN
PG two and W G tools. So it's psudoUT install
Gnu PG two and WG. Perfect. Let's clear again. And now we need just one command to install postgress
on bumto server. Guess what it is,
It's postgressQL. So do up to install postgressQL. That's it. Let's clear again. We can check if it's been installed using
System CTL command, systemctl L pipe, and
we'll grab for post. We can see Postgress 16
main service is running. We can check exactly
this service by typing system CTL status, Postgress QL at 16 main. Postgres QL. Sorry. And
it is up and running. If we control CA,
be clear again. If we want to connect to
the Postgress itself, we can use PSQL, utility. That is something
that is installed together with the
postgress and you can use it and then use user
Postgress. We have the error. It says, Peer authentication
filed for user Postgress. So you can see that we have to configure some bits and bobs. Let's stop this postgress first. We don't need it up
and running right now. We can use this command
but change status to stop. Oh sorry. Let's
switch the user too. So to the root user. So now we can run
this command again. Now if we check the status, we can see it's stopped. What we have to
do, we have to go to the data folder for Postgres. That's basically where
the database files exist if you write
something to that database, and this data will
be kept in VR, lip, postgres QL,
16 forwardlash MI. If we run LSLA, all those files and folders belong to Postgres
user and group. As I said, this is
the location where the data of that database itself is located. But there is yet another location, and it's in /etc: it's /etc/postgresql/16/main, and this is the location where all the configuration files are kept. Two very important files are here: pg_hba.conf and postgresql.conf, and we will play with both of them, so we will know exactly what they are for. Well, let's actually modify that pg_hba.conf, because we need to change some things here. So I do vim pg_hba.conf. You can use Ctrl+F to page forward. And you can see some entries here at the bottom. Let's make it slightly smaller so they fit on one line. Okay, let's close it. I hope this formatting is a little bit better. So let's change some things here. I'll press I, and here in this first line, I want to change that last peer to trust. Then the entry for the Unix domain sockets, also from peer to trust, and maybe this third one, for localhost; let's change this one as well to trust. Okay, we can now press Escape, colon, wq. Let's start Postgres again. Now, if I try to connect to that Postgres
database locally, I should be allowed in. And I am in. I can do stuff like CREATE DATABASE marek, let's say, and a semicolon. If I now press backslash l, it should be displayed. Again, the formatting is not perfect, but you can see the database marek exists. So we now know that Postgres is up and running. Let's quit with backslash q, and let's press Ctrl+B again, colon, setw synchronize-panes; it will actually unset it for us. Because what I wanted to show: let's connect again as user postgres. If I create another database, let's say jack, it will only exist on that secondary, because these databases are now two separate servers. They don't talk to each other. Backslash l shows me jack and marek on the secondary. But if I do the same on the primary, we can only see the database marek. There is no database jack.
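For reference, a rough sketch of the local pg_hba.conf entries after that change, plus the quick test; the exact layout of the default file can differ slightly between installations:

# /etc/postgresql/16/main/pg_hba.conf - local entries switched from peer to trust
local   all   postgres                       trust
local   all   all                            trust
host    all   all        127.0.0.1/32        trust

sudo systemctl start postgresql@16-main
psql -U postgres -c "CREATE DATABASE marek;"
psql -U postgres -c "\l"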
5. Configure primary and standby server: What we want to achieve, though, is the first one: this PG primary, we want to have it as the primary database. And when we write data to this primary database, we want it to automatically stream that data to the secondary database, to this one, so they are both in sync. Whatever we write on the primary is automatically replicated to the secondary. And if anything ever happens to that database, then we can simply shut it down and promote that secondary database to be our new primary. Let's do it then. We can stop the Postgres service on both of those servers again. And what we want to do is amend that pg_hba.conf file again, but this time on the primary server, where we will allow connections from the secondary. So let's just concentrate on that server, on the primary one. We have to play with this file again. Let's vim pg_hba.conf, and let's add a new entry, maybe at the bottom. We need an entry saying host, then replication. Then we need to specify a user that we are going to use for that replication process, and I will just use the postgres user, and then the IP address of the other server, where the request for replication will be coming from, and that will be our secondary server. Let's check that IP address. This is its IP address. So 172.31.30.17, and I mean the exact address, so it's /32. We don't strictly need that, but let's be specific, and then trust. That means this secondary server will be able to connect to the primary for the replication purposes as the user postgres, and it will not require any authentication to do so. Escape, colon, wq. Let's cat that pg_hba.conf again. That's our entry, and it's the entry that the secondary server will be able to use to connect to the primary. By the way, you don't need to change anything on the secondary, because the secondary is the server the request will be coming from, and outgoing connections are allowed automatically; you only need to allow the incoming connections on the primary. Perfect.
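For reference, the replication entry added on the primary looks roughly like this (the /32 address is my standby's IP; use your own):

# /etc/postgresql/16/main/pg_hba.conf on the primary
host    replication    postgres    172.31.30.17/32    trust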
I want to change one value in that pogre.com,
and I will tell you why. Let's first VM to
that posgsqel.com. Control F. And this is the
value I want to change. Listen address, currently,
is commented out, which means it uses
default option. You can even see it here, defaults to local host, but we want to change
it to asterisk. And what it is, 1 second, let me change it first, and then I will tell you what it is. The explanation is right
here, to be honest. It says, what IP
addresses to listen on. That means by default, the postgress wouldn't listen on any external IP addresses or interfaces that this
server is configured with. What I mean by that, let's maybe go to primary
for a moment. For example, these are the IP addresses the primary
server is configured with. It has a loopback
interface, 127001, and it has ENS five interface configured with this IP address. By default, postgres
will only listen on loopback LO interface one, and not on that ENS five
external interface. That means these two servers wouldn't be able to connect to contact each other because they are going to communicate
on this interface. That's why we have to change
this listen address to all possible interfaces
that post grass has. You don't have to listen
on all interfaces. You can be more specific and choose just one or
two, let's say, but usually people tend to use just like me.
Just use Asterisk. That means every
single interface that is configured
on this server, the pose grass will
be listening on. Let's save it.
Escape problem WQ. We have to do the same on
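In other words, the one line we changed, shown as a sketch:

# /etc/postgresql/16/main/postgresql.conf (on both servers)
listen_addresses = '*'        # default is 'localhost'; '*' means listen on every interface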
All right. What we can do now: we can start the Postgres service again on the primary server, because the primary is now basically configured. We can check the status as well. We can see it's active and running. On the secondary, if we didn't change anything, Postgres should be down. And it is inactive, and that's fine. If for some reason you have it running, you have to stop it right now, because we are going to configure it as our standby server. So we go to the location where Postgres keeps its data, and that was /var/lib/postgresql/16/main. This is all the data, and we want to remove it all. Be careful, because remember, all the data is gone at this stage. Our database, whatever we called it, jack or something, will be gone, and anything else that is saved now will be gone. We're removing it all. All right, you can see there is nothing there anymore. What we want to do
now is run a command called pg_basebackup. And if you want to learn more about pg_basebackup, even directly on the server, you can run man pg_basebackup. As you can see, it says: take a base backup of a PostgreSQL cluster. And there is quite a long list with all the options and arguments, but I will just talk you through the ones that we are interested in. First of all, you have to run it as the user postgres. That's very important. So if you are somebody else, you have to make sure that you run sudo -u postgres. It's a lowercase u, actually. I mean, I don't need sudo here because I'm already root, but never mind, it doesn't matter. The important thing is to run it as the user postgres. The next thing is the command itself, so it's pg_basebackup. Then it's -h and the host, the remote host. So who do we want to replicate from? We want to replicate from our primary, and that's its IP address. So that's where we want to replicate from: it's 172.31.31.144. Next, we can do -w, lowercase. That means we can do it without providing any credentials. And it's up to you if you want to use credentials or not, but I'm lazy. We already configured in pg_hba.conf on the primary that we trust that other server, so we can have that conversation without any additional passwords. Now, -U: we want to specify the user that will be used for the replication purposes. In pg_hba.conf here on the primary, I used the user postgres. But you can use a different user here. For example, you might have a dedicated user, like, I don't know, replicator or whatever. You can call it whatever you want. But whatever you use here, you have to specify it in pg_hba.conf on the primary server. For me, it was the same user as here, which was postgres, created by default when we installed PostgreSQL. Now, there are
some other options, like -F. This is the format of the output. We can choose plain, for example, but plain is used by default anyway. In theory, you can omit it, but I like to include it in case one day somebody decides it should no longer be the default. Next, you can specify the WAL method here, and it's -X. The mode I always use is stream. You can use either fetch or stream here. It's the information on how you want to fetch the WAL files, the write-ahead log files. If you're not that familiar with Postgres: basically, whatever you do on Postgres, like when we created the database, all that information is first written to a file called the write-ahead log, or WAL for short. Only once that WAL file has that information is it then, and only then, written to the actual database. This is to make the database even more resilient. The thing is, when you have a very large database to replicate, this process of taking a pg_basebackup, as we do now, might take hours and hours or even days. Remember that the primary server is still running and probably keeps receiving new information, and that information is written to those WAL files, and we choose here whether we want to stream all those new WAL files to this backup server as soon as they are created, or, if we chose the fetch method here, whether those WAL files will only be fetched at the very end of the process, not in real time. The stream method creates a separate channel that listens in real time for any new WAL files created on the primary, and if WAL files were created in the meantime on the primary, it will get them without waiting for the pg_basebackup process to complete. It will fetch those WAL files immediately, grab them from the primary and save them on the secondary as a kind of independent process. Then we have -R, which tells pg_basebackup to
create the recovery configuration and the standby.signal file, so that once we start Postgres, this server will realize it is a standby server and it is not supposed to write anything to the database. It is only supposed to keep replicating data from the primary server, so it's not supposed to write anything itself. You should not be able to create or change any data on this server, and that's exactly what we want at this stage. This is supposed to be only a secondary, listening to the primary. The next thing is your choice again: it's -S, and I will call it marek_slot or something like that, and dash C. What is it? Well, these two arguments go together. -C says to create the slot, and -S is the slot name, and I called it marek_slot. Combined, they create what's called a replication slot. It's not mandatory. It basically means that, for all the WAL files that are on the primary, the primary will wait until the secondary confirms: yes, I've got all the WAL files, it will confirm that back to the primary, and only then will the primary get rid of any WAL files. It can actually cause some problems, because if your secondary is down, the replication slot is still registered on the primary, which means the primary will keep accumulating WAL files and you might basically fill up your volume, but it also has great advantages. Well, let's leave it as it is, because we just need one more argument, and it's -D. It's the directory where we want to write all the data that we fetch from the primary, where we want to save it on the secondary, and we want to save it exactly where we removed all the previous files, and it's this location.
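Putting all of those flags together, the whole command looks roughly like this; the primary IP and the slot name are the ones from my setup, so adjust them to yours:

sudo -u postgres pg_basebackup \
  -h 172.31.31.144 \
  -U postgres -w \
  -X stream \
  -C -S marek_slot \
  -R \
  -F plain \
  -D /var/lib/postgresql/16/main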
All right. Let's press Enter then. That's it. Well, this database is very, very small on the primary, so it took only 2 seconds. But as I said, this process can also take days. Now the last thing we have to do is start the Postgres database. Now let's do a little experiment. Let's go to the primary and connect to the database. We now have marek and postgres... one second. That's better. So we've got marek, we've got postgres, and template0 and template1. If we now create a new database, like here, marek12345, and we now go to the secondary and connect to the database, you can see this data is immediately replicated. So that means any new data here is reflected on the backup server as quickly as the transfer between the servers allows that new information to replicate. If you run SELECT * FROM pg_replication_slots on the primary server, you will see that marek_slot was created, and it's actually used to stream that data immediately to the secondary. Let's try to write something on the secondary, though. Let's say I want to create a new database here. You can see you will not be able to, because this database is currently in read-only mode. We will later on simulate the failure of the primary, and we will promote this standby server to be our new primary. But before we do that, let's add yet another resiliency feature, which is an external, or let's call it offsite, backup.
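A quick sketch of that verification, using psql one-liners (marek_slot is the slot name we picked for pg_basebackup, and the database name test is just an example):

sudo -u postgres psql -c "SELECT slot_name, active FROM pg_replication_slots;"   # on the primary
sudo -u postgres psql -c "CREATE DATABASE test;"   # on the standby: fails, because it is read-only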
6. Install and configure pgbackrest: And we will install a
tool called pgBackRest, and we will use it to send the main backups, and we will also start archiving all the write-ahead log files that are being generated, and we will send all that information to that offsite storage. This way, even if both servers go down, both primary and secondary, we will still be able to rebuild everything using that offsite backup. And as I said before, I will use an AWS S3 bucket as the remote location, but you can configure other remote locations as well. Whatever you have there, pgBackRest most probably supports it. What we need to do first: we have to create SSH connectivity between the servers, because pgBackRest will need to have full information about both servers, primary and secondary, and it uses the SSH connection to do that. pgBackRest does not use a separate user. It will use the postgres user that was already created when we installed PostgreSQL. So as you can see, I exited tmux for now, because I don't like how tmux sometimes plays up when copy-pasting between panes, and we will do some copy-pasting. Plus, there are various ways of creating and exchanging SSH keys, but never mind. Basically, we need to do this as the user postgres, so sudo su postgres on both of them. We just create the SSH keys, and the command is ssh-keygen. Then just Enter, Enter, Enter. Do the same on the primary. All right. Now if you do ls -la, we can see a new folder, .ssh, and the same was created here. Let's clear again.
If we go there, we can see a pair of keys. This is the private key and this is the public key. Basically, we have to copy what's inside here to a file called authorized_keys on the other server, and vice versa. So we will copy this public key back to this server, so they will trust each other. So we are in .ssh. We touch authorized_keys; that's the file we need, or we can do vim as well, because we need to add the public key from here. This is the public key. I will copy that and place it there. Now here, on this server, we copy from here. That's all we need, really. Now, what's the IP address again? That's 172.31.31.144. Yes. All right. You can see now it says pg-primary. We SSH'd from the secondary to the primary server. So if I exit, we are back on the secondary now. But let's try the other way around. Once again, what's this address? It's this one. So let's try both sides: ssh 172.31.30.17. Yes. Okay, secondary. So we SSH'd from primary to secondary. So we can exit. We are back on the primary. That works. That's fine.
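A rough sketch of that key exchange, run as the postgres user on each server; the IP addresses are from my setup:

sudo su postgres
ssh-keygen                          # accept the defaults with Enter
# copy each server's ~/.ssh/*.pub contents into the other server's ~/.ssh/authorized_keys
ssh 172.31.31.144                   # test from the standby to the primary
ssh 172.31.30.17                    # test from the primary to the standby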
Now we have to install pgBackRest, and the command is apt install pgbackrest. I mean, let's exit; we actually need to be root, or use sudo. Yes. Let's do it here in the meantime. That's it. If we now go to the /etc location, well, you can see there is a file called pgbackrest.conf. So that's the new file, and the owner is postgres, and the same here. One second, what's going on? I was doing something stupid, but never mind. What I mean is, this is the file we are interested in, and we have to modify that file, because this is basically the main configuration for the pgBackRest tool. If we cat that file, we will see that there is some basic config, but it's just a template for you to change. This config will not work for you as it is now. If you go to the official pgBackRest website, I have the user guide here, and there is loads and loads of info. I will skip this part, but I want you to know it's really well written, and it covers things like WAL file encryption. But what I'm interested in is the S3-compatible object store support. You can also find Azure here, you can use SFTP; you can basically use storage in any remote location you want. But I chose an S3 bucket for that purpose. So let's go to AWS. Do I have any S3 buckets here in this account? Oh, there is one. I don't even know what it is. Never mind; let's create a new bucket. Let's call it, what, pgbackrest-automation-avenue, something like that. All right, and just create the bucket. Ah, no underscores, only dashes. Okay, that's better. Also important is the location, the AWS region, eu-west-2; if you use an S3 bucket as your backup solution, we will need that eu-west-2 as well.
We have to remember that. Let's go back to the servers. Whatever is shown in here, we are not interested in, because as you can see it's for a different version; I have the config ready for us, so let me just paste it, and I will quickly go through what it is about. Let me just grab the IP addresses again. And now just vim pgbackrest.conf, and we remove everything. The primary config, maybe let me leave it as it is, or go to the secondary and do the same. But the config will be slightly different for the secondary and for the primary. So, maybe let's do it on the primary first. This will be the name of the folder in our S3 bucket; this is how pgBackRest will send that data to S3. Retention full: it means it will keep two full backups in S3, and it will automatically get rid of the oldest one once it receives a new one. The bucket we created was that pgbackrest-automation-avenue; this is the endpoint. This is the important bit: you have to remember where you created your S3 bucket, because you have to use that information here. And then just the info that we want to compress it. It's always better to compress, because it will use less bandwidth and also less space in S3. This is the backup-standby setting. It's optional, but basically, when we run the command, as you will see shortly, we run it on the primary, but the backup will actually be performed by the secondary. That is useful, because your primary server is usually the busier one, so it's good to offload some work to the standby. This is the user we want to specify. This is the IP address of the secondary; on the secondary, there is the IP address of the primary. You can see the difference: pg2-host, pg2 being the secondary, and pg1-host, which we point to the primary. They have to be able to talk with each other using that SSH connection, and then there is the location on each server where the data directory actually exists, which is the same on both. This is for WAL archiving, where we want to compress that data as well. All right. Let's save it then, and here as well.
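For orientation, a sketch of what such a pgbackrest.conf on the primary could look like; the repo path, bucket name, region and IP address are the ones from my walkthrough (yours will differ), and repo1-s3-key-type=auto is one way to let pgBackRest use the instance's IAM role instead of access keys:

# /etc/pgbackrest.conf on the primary (sketch)
[global]
repo1-type=s3
repo1-path=/cluster-one
repo1-s3-bucket=pgbackrest-automation-avenue
repo1-s3-endpoint=s3.eu-west-2.amazonaws.com
repo1-s3-region=eu-west-2
repo1-s3-key-type=auto
repo1-retention-full=2
compress-type=gz
backup-standby=y

[global:archive-push]
compress-level=3

[main]
pg1-path=/var/lib/postgresql/16/main
pg2-host=172.31.30.17
pg2-host-user=postgres
pg2-path=/var/lib/postgresql/16/main

On the secondary, the [main] section is mirrored: its own data directory stays a plain local path and the host options point at the primary's IP instead.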
There is one thing, though, that we have to change in the postgresql.conf file as well. Let's go to /etc/postgresql/16/main/postgresql.conf. We scroll down to the archiving section. We want to uncomment archive_mode; archive_mode will be on. This is for WAL archiving, meaning write-ahead logs. The idea is that whatever is written to the server, we also want to have it in our archive, and Postgres needs to know what command is used to actually archive those WAL files. You can see it below here: archive_command. You can also find this information in the pgBackRest instructions, but what it is, basically, is pgbackrest --stanza=main archive-push %p. Well, I mean, whether the stanza equals main depends on you, because we will create something called a stanza, and we will call it main. But if you call your stanza a different way, then you will have to reflect that in this configuration. That stanza is not created yet, so I know it might be confusing. And %p, that's the path of the file to archive. It gets filled in automatically when the command runs, so it's kind of like a parameter, and we don't repeat information that is already in pgbackrest.conf.
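So, as a sketch, the two lines in postgresql.conf end up looking like this on both servers, assuming the stanza will be called main:

# /etc/postgresql/16/main/postgresql.conf - archiving section
archive_mode = on
archive_command = 'pgbackrest --stanza=main archive-push %p'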
All right, but we have to do the same on the secondary. Now we can save them both. Let's restart Postgres as well, after all those modifications. Let's see the status. I nearly forgot a very important bit, because we have to make sure, and I can't remember if I changed the IAM role: we have to make sure that our instance, or server, let's say, is actually able to write to the S3 bucket. This is the primary, and I forgot to add the IAM role. Okay, I've got a role; with it, the server is able to connect to the S3 bucket, and that's very important, because that's where it is going to send the backups. Okay, that's better. Fine, that's perfect. We now should be able to create that stanza. This is the main thing that pgBackRest uses for the backup purposes. It simply synchronizes all the information it has for both of the servers. I need the user to be postgres, though, and the command is pgbackrest, then --stanza, and, as I said, we're going to call it main. In all the pgBackRest documentation they always say main, but you can call it whatever you want, and then stanza-create. Maybe I will add a log level here as well, so we will see on the console what's going on in the background. Let's press Enter. As you can see, it reads the S3 bucket information. I can see the folder in S3 will be called cluster-one, and we can see it completed successfully, and it took just 3 seconds.
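For reference, the stanza creation as a one-liner; info is my guess at the console log level used:

sudo -u postgres pgbackrest --stanza=main --log-level-console=info stanza-create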
7. Taking full backup with pgbackrest: If we go to the S3 bucket now and use this one, we can see a new folder was created, called cluster-one. And if you click on that, we can see two more folders: archive, which is going to be used for WAL archiving, the write-ahead logs, and backup, which is going to be used for full Postgres backups. Please note, though, that the full backup is not taken automatically. It's up to you when it's taken, so let's just do that. Currently, let's have a look first inside main, and we have those two files, but it's 370 bytes. This is not the backup. So let's go back. Let's go to the servers. And now, to take the backup, the command is pgbackrest --stanza=main. Again, you always have to specify which stanza you want to use. Let's set the console log level, maybe to detail for detailed information, then type equals full for a full backup, and then the backup command itself, which I think is without a dash. For the first backup, you should always do a full backup, because pgBackRest can also do incremental and other types of backups, but you need the full backup first, because currently we don't have any. So if we now press Enter, the full backup is being taken and it's being sent to the S3 bucket. All right, nice.
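That backup command, written out as a sketch:

sudo -u postgres pgbackrest --stanza=main --log-level-console=detail --type=full backup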
What we can do now is type pgbackrest info, and you get the information: what is the database size, what is the backup size? That's the size in S3... sorry, no: the first one is the real data here on the server, and the backup size is actually much smaller, because we are compressing it, remember? And this part is regarding the WAL files, the write-ahead log files. Each backup will have some WAL files that are bound to it. Then, when pgBackRest removes that backup, if, for example, you send a third backup and it is only supposed to retain two backups, it will also automatically remove the corresponding WAL files. It keeps everything nice and clean for us, and we don't have to worry about it. Let's just go back to the buckets. Now, backup, main, and we can see more files and a new folder. If you go to that folder, you can see that bundle. This is the actual information, plus some pg_data, and it's all compressed, exactly as we have it configured. That's basically it for the configuration. We've got two servers, the primary sends all the data to the secondary, and regarding pgBackRest, we can simply run cron jobs, or however you want to run them. You can create a cron job that will run this command and take the backup, let's say, once a day or once a week, maybe on Sunday; it's up to you.
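As a sketch of that idea, a crontab for the postgres user could look like this; the schedule, and the differential backups on weekdays, are just an example of what pgBackRest supports, not something configured in the video:

# crontab -u postgres -e
# full backup every Sunday at 01:00, differential backups the other nights
0 1 * * 0    pgbackrest --stanza=main --type=full backup
0 1 * * 1-6  pgbackrest --stanza=main --type=diff backup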
But there is one more command I wanted to show: pgbackrest --stanza=main check. That will show you the information about the stanza itself, and life is good. We can take another backup if we want; a full backup, let's say, let's send another one. So if I run pgbackrest info now, we can see that we now have two backups in S3, in the cloud.
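The two status commands from this chapter, for easy copy-pasting:

sudo -u postgres pgbackrest --stanza=main check
sudo -u postgres pgbackrest info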
8. Restore data from backup: Let's start breaking things then. That's what the resiliency is for. We should be able to recover from various scenarios. Let's maybe make a big mess by physically removing the entire running primary Postgres server. Let's just go to AWS, to EC2. Let's just terminate that running PG primary: Instance state, Terminate. Let's wreak havoc. The connection should be gone already, and as we can see, it is: closed by remote host. But we still have the secondary, and this is actually very easy to recover from. Because our secondary server has all the information that we need, we only have to remove that read-only behavior. Remember, it can't write anything; it can only read. We have to promote it then, from secondary to become primary, and it's super easy. You just connect to Postgres and you run SELECT pg_promote; it's actually with an underscore and brackets at the end, because it's a function. Press Enter, and that's it. It says true. From now on, I am able to write to this server. Let's see. We've got marek and marek12345. As you can see, now we can write to it fine.
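That promotion as a one-liner on the standby (pg_promote() is available in Postgres 12 and newer):

sudo -u postgres psql -c "SELECT pg_promote();"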
So what you have to do is simply point your DNS to the IP address of this server now, and you can carry on as if nothing had happened. You will obviously need to build a new standby server and create the replication, this time from this server, as this is our new primary now, but this is something you can do later on. From the customer's perspective, all services are now back up and running, I mean, once you point the DNS to this server. That's fine. But let's maybe break it even further, yes? What if we have a failure of the secondary server as well? What can we do then? Maybe let's say somebody wiped out all our data, or maybe the data was corrupted. Yes, let's do that. Let's stop Postgres. Let's go to the main database data folder. Yes, all those files are Postgres's own data files, so let's remove them. All our data is now gone. You can see it's an empty folder. To recover from S3, though, let's close this; that server is already gone. Is it terminated? Before we recover from S3, let's open the pgbackrest.conf file. Currently, we have information about the other server, but that server is gone. We have to treat our server, this server, as the primary one now. So let's just get rid of any line that talks about pg2, and write that config. Now again, as the postgres user, and you can basically also run it from any user that has the sudo privileges, it's pgbackrest, again --stanza, which is main for us, and restore. And as you can see, it's already grabbing the files from S3; because it's not a lot of data, it completed successfully in not even 5 seconds.
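The restore itself is just one command, run against an empty data directory, as a sketch:

sudo -u postgres pgbackrest --stanza=main restore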
What does it mean? It means if I do ls -l now... can you see my silly mistake? Everywhere in the config, I had postgresql 15, not postgresql 16. Technically, we could recover from that by pointing to a different folder, but to do it properly, let's clear that. Now, let me do one more thing. cd, somewhere here? Yes. Now we have 15 and 16. First, let's remove that entire 15 folder; we don't need it. And now let's amend our pgBackRest config. That should have been 16 from the very beginning. Let's run that restore command again; another 5 seconds or so, but we can see now at least it's going to write to the right folder. Awesome. Now we can start Postgres again. And hopefully, I believe it's fine. It's running, but let's log on. Let's see what's inside. pgBackRest works as expected, as we can see. That's all for today. I hope you liked it, and thank you for watching.