Transcripts
1. Introduction: Hi, everyone, and welcome to this Proxmox-based home lab training. My name is Marko Bukowski, and in this tutorial, I will show you what an amazing tool Proxmox is. It will let you run an entire home server, including various operating systems to fiddle with. We will also see how to turn it into a home media platform by deploying the so-called *arr apps, and we will also have some fun by running, for example, an entire Windows operating system in a Docker container. Generally, we will go through many examples of how you can utilize it at home and how you can widen your knowledge by using Proxmox. If you have no idea what Proxmox is and you have never touched anything Proxmox related, that's great, because we will do everything from scratch here. As a project, you will build your own virtual machine on top of Proxmox with specific parameters provided. And if you also want to learn more, not only about Proxmox but about Linux or programming or cloud, then I also encourage you to have a look at our Automation Avenue platform, where you can find hours and hours of very good IT-related learning materials. Let's waste no time then. In the next video, you will see what Proxmox is and also how to install it.
2. What is Proxmox and Proxmox installation process: In this video, I want to present how to install Proxmox Virtual Environment. If you're not sure what Proxmox is: Proxmox is a type 1 hypervisor, or in simple words, it's a Debian-based operating system (so a Linux distribution) that lets you easily run other operating systems. What I mean is, once you have Proxmox installed, you can then install various operating systems like Windows, Linux and others on top of that Proxmox hypervisor. You can run all those operating systems at the same time. The role of Proxmox is to distribute resources like CPU or memory; it will distribute them dynamically to each of those operating systems. You can also run something called LXC containers, or Linux containers, on Proxmox. But don't worry about it too much, because I know they are weird names, and at this stage, you should just be aware that you will be able to run all those operating systems and all those LXC containers at the same time once you have Proxmox installed. They can all run simultaneously. So how do we install
that Proxmox then? I've got a very cheap Celeron N5100 based four-core mini PC that I bought for, I think, around 60 pounds or something like that, which is around $70, I guess. Well, I had to add an SSD and the memory because it didn't
have any when it arrived. I will show you today the installation process on that mini PC. But in fact, you can install Proxmox on nearly anything. You can install it on an old laptop, an old PC, or even on some network attached storage devices. To install Proxmox on that mini PC, you first need to download the Proxmox VE ISO from the Proxmox website. You also need a USB drive. I've got a 16 gig SanDisk USB drive here, but I think even 4 gigabytes is more than enough, because the ISO image is about 1.5 gig. But you have to be careful, because we will erase all data from this USB drive in the process. So make sure you don't have anything important on it, or you simply copy it somewhere else. So we need to insert it into a laptop, PC or any other device, and then we just Google Proxmox download. That's it: first link at the top, and you will see not only Proxmox VE, but also Proxmox Backup Server and Mail Gateway. But we are interested in
the first one, the top one, Proxmox VE, 8.4, and we
click that Download button. The fact that it's Proxmox
8.4 doesn't really matter because the installation process
didn't change for years. You will see the
process is very similar even if you run different
version of Proxmox. We just wait for the
download to complete. And next, we need a
program like Rufus, Balena Etcher or another that is able to create bootable USB drives. As you can see, I use Balena Etcher for that. I already have my USB inserted. I just start Balena Etcher, I will pick the image we just downloaded, and then I will choose the SanDisk USB drive as my destination. Then Balena Etcher will do the rest. At the end of this process, you will see a lot of rubbish thrown by Windows, but don't worry about it. This is because Windows does not recognize that drive and its partitions anymore, but that's expected. You can close all of that and just eject the USB. The process is now completed. We have a bootable drive now.
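By the way, if you are on Linux or macOS and prefer the command line over Balena Etcher, the same job can be done with dd. This is just a rough sketch: the ISO filename and the /dev/sdX device name are placeholders you have to replace with your own, because writing to the wrong device will wipe it.

# double-check which device is the USB stick first
lsblk
# write the Proxmox ISO to the stick (this destroys everything on it)
sudo dd if=~/Downloads/proxmox-ve.iso of=/dev/sdX bs=4M status=progress oflag=sync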
Now, let's go back to my mini PC then. I have that RAM and SSD installed now. The next part is to check some BIOS settings. From my experience, most devices should already have the BIOS configured correctly for what we need to do here. But to keep this guide complete, let's just have a look at the BIOS options, because maybe in your case you will need to change some settings here. I connected the power cable and an Ethernet cable to connect it to my home network. But at this stage, we also need a keyboard, a mouse, and an HDMI cable connected to my monitor for those BIOS checks and for the Proxmox installation process. Once Proxmox is installed, you can disconnect the keyboard, mouse and HDMI cable, because Proxmox can be controlled remotely over our home network. You will see what I mean. You can see I didn't insert the USB drive yet, but it shouldn't really matter if you do it now or later. To get into the BIOS on this mini PC, I have to start the mini PC and then keep pressing the Delete key on my keyboard. But depending on the BIOS, it might be a different key, like F2 or F12, for example. So you have to figure out which key you have to use to get into the BIOS on your machine. Once we're in the BIOS, we
are interested in the advanced options, in CPU configuration. I have to make sure I have virtualization enabled. I have an Intel processor, so it's called VMX, but if you have an AMD processor, you should be looking for something like AMD-V or something similar; simply make sure that option is enabled. This is interesting: under Security, you can see Secure Boot. I heard in many tutorials that you have to disable that to install Proxmox, but that's interesting, because I've never done that. It's always been enabled here, and it's been working fine. But I don't know, maybe it simply doesn't matter. What we need, though, is the boot sequence, and you can see the boot order priorities; we need to have our USB device as the first option. I mean, at least it has to be before the hard drive boots up. But I don't have to change anything here, because it's already set to the USB device as the first boot option. So now I insert the USB drive. But if you've already done that, that's fine. It doesn't really matter.
But at this stage, you have to have it inserted, and I go to Save Changes and Reset. This might look confusing, but reset really means just reboot; it will not reset any settings. I will just save the changes and reboot the mini PC, and it should boot from the USB drive now. That's me in the background, hello. And here I need to choose the first option, which is highlighted by default: Install Proxmox VE (Graphical). You will be presented with the license agreement, super exciting reading that everybody does, I guess; you simply have to click I agree in the bottom right corner. Now we have the target
hard disk options. In my case, it's very easy
because I only have one drive, the SSD drive that
I've just installed. But it might not be
the case for you. Maybe you have multiple drives. Maybe you have a machine with, I don't know, ten hard drives and four SSD drives. What I want you to be aware of is that if we click on those Options, by default the ext4 file system is chosen, and I'm okay with that. But if we click on that drop-down menu, you will see that XFS is available, but also ZFS and Btrfs. ZFS is a really interesting one, and you can see it has many RAID configurations, which can be used for either speed or redundancy reasons. But at this stage, I only want you to be aware of that. If this is your first Proxmox installation, then picking ZFS might not be the best option, because there are some bits and bobs that you have to know about to make sure ZFS is really what you need. And we will talk about ZFS a little bit later. So for now, let's just leave that ext4 file system, but just be aware that you can change that setting here by clicking that Options button if you want a different file system for your boot system drive, okay? So let's just click Next in
the bottom right corner. And here we have to
choose the country, and the time zone
and keyboard layer should be chosen automatically,
so just click next. And here you pick the
password for your root user. So you have to type
in the password, and then you have to type
in again just to confirm. Regarding the email, it's up to you if you want
to use your email, but any email will do,
even the fake one. There's nothing wrong providing
your email here because you might have
some notifications from Proxmox when
something goes wrong. Yes. Now we have management
network configuration tab. If I click on those interfaces, you can see I've
got four available, but only one with green light. But maybe you've
spotted already that I have a network interface
card with four ports, but only one cable connected, so it's chosen automatically. Second option, host name
is if you wanted to use it instead of IP addresses if you need a fully
qualified domain name, that's something you put here. You can change it, but I
will leave it as it is. But next thing is
the IP address, the gateway, and the DNS server. First of all, where did Proxmox take that IP address from? Proxmox configured this IP address as 192.168.1.115 because that's the information it got from my DHCP server, and everybody has a DHCP server at home. It doesn't matter whether you are aware of that or not. The DHCP server, or Dynamic Host Configuration Protocol server, in most households will run on the router, the one device, let's say, that you received from your Internet service provider. All your home devices will get their IP address from that DHCP server or service. Proxmox will work perfectly fine if you just accept what's here; you could click Next right now. Well, for your laptop or phone or TV or whatever device, it doesn't matter what IP address it has. But for Proxmox, you are much better off with a so-called
static IP address. And I will try to quickly show you how to assign a static IP address without turning this into a DHCP training video. So basically, what I have to do is log on to my router, the device that I got from my ISP. You will find the credentials for how to log onto it on a little sticker on the device itself. It will say something like a management IP, and the credentials will be there. For me, I know the address is 192.168.1.1, the username is admin, and the password I have to read from that sticker, and that's it. I can now configure this device, this router. What I'm interested in is the LAN portion. I have it here in the bottom right corner. You can see it's the same IP address that I just typed in my browser, 192.168.1.1; it's this device itself, and then below is the IP address range. When I see a range, that means it's something DHCP related. You can even see DHCP is enabled on the next line down. If we just go to that tab, you can see again on the left the IP address of the device itself. Below is something called a subnet mask, and on the right, you have the beginning IP address and the ending IP address. The ending IP address will usually be set to .254 or something like that. But I change that number to 200. Why do I change it to 200? Because I'm narrowing down the scope of IP addresses that the DHCP server can assign to other devices. All my home devices, if they get their IP address from this DHCP server, will only get IP addresses from the range 192.168.1.3 up to 192.168.1.200. And the remaining range, which is from .201 to .254, is now available for me, and I can assign those IP addresses statically to any devices I want to configure statically. What I usually do, and I'm not saying that you
should do the same, exactly the same, but I just want to tell you how it works in my network: I usually assign 192.168.1.201 to the Proxmox host itself. This is the first available static IP address. And what I do next, once we start creating virtual machines and LXC containers, et cetera: they will have so-called IDs, like a VM ID or a container ID, and for everything I run on my Proxmox, I will also set static IP addresses to match. If I create a virtual machine with an ID of 202, then I will statically assign the IP address 192.168.1.202. If I create a container with an ID of, let's say, 210, then I will statically assign the IP address 192.168.1.210. It makes life so much easier. I know it's more difficult at the very beginning, but it's so much easier later on to work with all those virtual machines and containers in the Proxmox environment. If we go back, you can see the gateway; that's the default gateway I meant. And the DNS server can also be left as it is. But the IP address for the Proxmox itself, I will change to .201, and I can be sure this IP address is available, because none of the devices in my home network will be able to get an IP address from the range .201 to .254. I know it's complicated. Don't worry about it. It's very difficult to explain everything about
button at the bottom. Automatically reboot after
successful installation. And that's actually
something I want to untick and only then I
want to click Install. The installation process will start and we just have to wait. But, you know, that tick, I don't know why it's there by default because by unticking it, it's easier to remove
USB drive in right time. If after installation,
this device rebooted, it would still try to
boot from that USB drive. So we have to remove it first. Otherwise, we will have a
vicious circle, you know. Our mini PC wouldn't want to boot from the hard drive,
from the SSD drive. I would want to boot
from the USB drive, which we don't want
because this process installs the Proxmx
on the SSD right now. You get the message
that installation was successful and look
at the next steps. It says reboot and point your web browser to the selected
IP address on port 8006. You can write it
down because that's how you will access
your Proxmo server. As you can see, the IP address is exactly what I
assigned statically, 192-16-8120 we need port 8006 to access Proxmox
user interface. I will remove the USB drive now and I will
click that reboot. Now my minipC will reboot, but now it will boot from the SSD drive and this video looks terrible.
Sorry about that. But it doesn't really
matter because that's not how we're going to
use our Proxmox. Basically, now, you can
simply turn it off. We know everything
works as expected. We can disconnect the keyboard, the mouse, the HDMI cable. You can hide that mini PC wherever you want,
and from now on, you can use other laptop
or PC or whatever device you want; you can access your Proxmox remotely by
just using your browser. So it's 192.168.1.201
on port 8006. The username is root and
the password is the one that you created during the installation
process, and that's it. First thing you will see
is no valid subscription. You do not have valid
subscription for the server, but don't worry about it.
Nothing wrong about it. This is simply
true. I don't have valid subscription, but
it doesn't really matter. We can just click Okay or close
it simply because we will fix this and some other things by running just one command. But let's first have
a look what's here. In that data center summary, you can see the status is green. The bottom, it will show
again that information about no subscription,
but don't worry about it. Then if you go to storage, you will see local and you will see that the content
is for backups, ISO images, and
container templates. This is the default
location where all those items will go,
you will see it later on. Below is the Local LVM, and this is the default storage for disk images and
the containers. But this is something you can reconfigure if you
have more disks. You can point it to
different locations, and this is the place where
you can even reconfigure. Maybe your backups will be
completely somewhere else. I saw images again
somewhere else. You are free to
reconfigure it the way you want. But I
will leave it as it is. I have only one
SSD drive anyways. If we click that
PVE and the discs, you have again that
LVM and that LVM, as you can see, it's red. But it doesn't mean there's
something wrong with it. They simply it's simply assigned space to LVS rather than used. Might be a bit
confusing. Never mind. No worry about it. Especially
if you have one drive only, that I think always will be red. And the LVM thin, you can also have the
information about that. But then you have the ZFS again. For me, it says, No discs unused
because I only have one disk and it is used
by the Proxmox host itself. But if you had some spare disks, you could keep attaching them to that mini PC, and then you can create a ZFS pool and RAID configurations, and you can see options like compression, ashift, et cetera. That's why I kind of omitted this topic: if you are just starting with Proxmox, ZFS is something you will hear about, but at this stage, I would just leave it as it is.
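Just to give you a flavour of what that would look like later from the command line, here is a hedged sketch; the pool name and the /dev/sdb and /dev/sdc device names are made-up examples, and wiping real disks is on you.

# create a mirrored ZFS pool from two spare disks, with 4K sectors (ashift=12)
zpool create -o ashift=12 tank mirror /dev/sdb /dev/sdc
# enable lightweight compression on the whole pool
zfs set compression=lz4 tank
# check the result
zpool status tank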
Then if we go to that Subscription section,
no subscription key. Yes, because I didn't
pay for subscription. You can pay for
enterprise grade support, but for my home usage and
for everybody at home, it will not be needed, probably. I am fine having no
subscription key. And it's not that it's
not legal or something. Here in Data Center in support, it will also say no
valid subscription. Again, that's fine, because I want to use Proxmox for free, and they let me use it for free. We could fix that stuff manually directly there in Proxmox, but there is a much easier way. We Google Proxmox community scripts, and we go to that first link, ProxmoxVE Helper-Scripts. Really nice page. It was started and maintained by tteck, a very well-known YouTuber that I really loved watching. Unfortunately, tteck passed away, but these community scripts are now maintained, well, by the community. So there will be more than one guy maintaining this repo now. It's very, very useful for running loads of stuff using just one command, which you will see shortly. Let's click that View Scripts button. You will see all the
scripts available. There are different categories, and the one that
we are interested in is Proxmox and
virtualization. And here we should see
somewhere it's over there, Proxmox VE, post install. So it's the script
that you are supposed to run when you complete
Proxmox installation. And instead of manually
changing million things, this one script will
do everything for you. You can see that you have to run that command in
Proxmox VE shell only. That means I will copy this
command here on the right. You can copy it because
it's pretty long, and it even tells you, be careful when copying
scripts from the Internet. Always remember to
check the source. And the source script
is also available, so you are free to check that, but I know that it's okay, so I just copy it
and we go back to Proxmox to the
PVE, to the shell. That's where they ask us to
run it, and I just paste it. I just press Enter and it asks
me a series of questions. Do you want to run that
post-install script? Do you want to correct the Proxmox VE sources? This is where the packages for my Proxmox server come from, and I say yes. And now the repositories. Currently, what is configured, and why I get those errors: I have Proxmox configured to use the PVE Enterprise repository, and it's only available to users who have purchased a Proxmox subscription. But I didn't purchase a subscription, so it asks me: do you want to disable the PVE Enterprise repository? Yes, that's what I want to do, so I just click Enter. And it asks me if I want to switch to a repository called pve-no-subscription, which is for users with no subscription. Yes, that's exactly what I need, so I just click Enter again.
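Just so you know roughly what the script changes under the hood, on Proxmox VE 8 (Debian bookworm) the manual fix is basically editing two APT source files; the exact paths and suite name can differ on other versions, so treat this as a sketch.

# /etc/apt/sources.list.d/pve-enterprise.list  -- comment out the paid repo
# deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# /etc/apt/sources.list.d/pve-no-subscription.list  -- add the free repo
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription

Then an apt update afterwards refreshes the package lists.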
Now, the Ceph package repositories. I'm not going to use Ceph, but I just click through it. Then the PVE test repository: enabling it can give advanced users access to new features, blah, blah. Well, it's up to you. I will click yes, but really, I'm not saying that you should click yes; it's up to you if you want to do that or not. But I don't mind, and now I just click Okay. Now it asks: if you
plan to utilize a single node instead of
a clustered environment, that's exactly
what I want to do. My setup is very
simple: a single node, no HA, so I can disable high availability. Proxmox will use fewer resources and it will write less stuff to the disk. I say yes, and it even says you can enable it later on if you want to; you're not losing anything permanently. It's fine. Then Corosync: that's the stuff that might write a lot of things to your disk, and I believe that's the main reason why it asks if you want to disable it. Again, you can explore whether you need it or not, but I definitely don't want it, and I want to preserve my disk for longer, so I want to disable it. Update Proxmox: it should be
pretty up to date, I guess, but yes, especially
now when it's new install and nothing
is running on it. It asks if it should
reboot the proxmox. And after bigger changes, yeah, it's usually
a good option, especially now when, as I said, there is nothing running, so we definitely want to reboot it. So it might take a while. You
can see connection closed, but it will only be
there for a while and the proxmox will
be up and running in a few seconds. Maybe
a little bit longer. I heard a little beep, so it's now reboot. I go to PVE summary, I can see the spike on my
CPU. That was the reboot. You can see the processor
: a four-core Celeron N5100. Now the Proxmox VE updates are green, and the repository
is not green. It's like a little
warning saying non production ready
repository enabled. With no subscription, that's
all we can do, so it's fine. If you click that, it says the no subscription
repository is not recommended for production
use, which is fine. My home is not a
production environment, but I get updates for ProximxV. You are wondering
the subscription, it will still say there
is no subscription key. That hasn't changed, because I still haven't
got subscription. So this is also expected. But basically, the
process is now completed. You can now start creating virtual machines in
the top right corner, for example, create VM. So you can create
virtual Windows machine or Linux machine or
whatever you want. And you can see the VMID.
That's what I mentioned: if I use 202 as my virtual machine ID, I will also use 192.168.1.202 as the static IP address. It's so much easier. You just check the virtual machine ID and you already know the IP address; you don't have to look it up. You can also create LXC containers; again, I change the ID and match the last part of the static IP to the container ID. The installation and preparation of your Proxmox
server is now completed. In the following
videos, you will see what virtual machines or
containers you can run, how to turn your Proxmox into home media streaming
platform using the so-called *arr stack, or how to bind some storage between
different virtual machines. I hope to see you
there. Thank you.
3. Install Ubuntu on Proxmox: Ubuntu 24.04
has been released, and it's not just
another release. It's LTS, means
longtime support. And so we can expect that it will be around with
us for quite a while. So today, I wanted to show
you how you can install it as a virtual machine
on Proxmox 8.2.2. So how do we do that? Well, we first have to download the ISO, the image of Ubuntu itself. So what I will do,
I will just go to Google and search
for Ubuntu 24.04. Let's go for that first
link at the very top. First thing I noticed is much larger in size
than the previous one, 22 oh four LTS. It's over six gig,
as you can see, but never mind,
let's download it. That might take a while.
Now when it's done, I can go back to Proxmax
and upload that ISO image. I've got the previous
one, Ubuntu 22.04. As you can see, it's 3.8 gig. I will choose Upload, find my file in Downloads, and you can
see this one is 6.1 gig, quite a difference here,
and then just click Upload. The task was okay. That's
what you want to see. Lo and close it now,
and we've got that ISO now available
within Proxmox. We can click Create VM, and my VMs usually
starts from 200. So this will be 205 because
I already have 202, 203 and 204. So this will be the next one up. I will name it Ubuntu-24-04. But it doesn't really
matter what you put here. It's just for your information. We can click next. Now the OS. What I have to do here is to just click the
correct ISO image. And it's the one we've
just downloaded. You go to 24 oh
four desktop amd64. We leave everything else as
it is and click Next again. In the System section, I only tick the QEMU Guest Agent option, because that will help you with display resolution and some other aspects later on, for example in remote desktop sessions. So that's the only thing we have to do here, and we can click Next.
For disks, it chooses 32 gig by default.
than that because that's the minimum recommended
size for Ubuntu 24.04. It used to be 20 for Ubuntu 22.04, but it's 25 for Ubuntu 24.04. You might also want
to tick the Discard option, which is basically a trim
option for your SSD drive. So with this though, I can
click next. Now the CPU. CPU basically, I mean, at home, you should
always choose host, which you can do by clicking
this dropdown arrow, then scroll to the very
bottom, and here it is: host. That basically means you kind of disable emulation
of the processor. And if you are unsure
what it is about, remember that we
always have that help button in the down left corner. So if you click that,
it will give you the instructions about the
current pub you're working on. If we search for
the types of CPU, like here CPU type, you can see that QO can emulate a number
of different CPUs. But here, in short, if you don't care about live migration, you can set the
CPU type to host, which will give you the
maximum performance. So maybe your use case
is different than mine, and maybe you want to virtualize processor because maybe you
care about live migrations. But because I don't
always choose host. So I always have
maximum performance. Hope that makes sense. So we can go back, so it's host, but I will also give it four
cores rather than just one. What you might also want to do is
click that advanced button. If you scroll down, there
is an interesting option. Allow guest OS to use
one gig size pages, which might be a good option. I usually turn it on, but I will leave everything
else as it is. But my point is, at this stage it might be different
for me than for you, but these are my settings
and I will just click next. Now memory, I have the
ballooning device enabled, which means I can set up
different maximum amount, and I can then pick
minimum memory used. My system can have floating
amount of RAM for this VM. If you can't see that
ballooning device, it's probably because you don't have that advanced option click. I can click Next to Network, and I don't want
to change anything here in the Network portion, so I just click Next again. This is just an overview of your settings, so you can have a look again and click Finish if you're happy with that.
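If you are curious, everything we just clicked through can also be done from the Proxmox shell with the qm tool. This is only a rough sketch of the same settings; the VM ID, the storage names (local, local-lvm) and the ISO filename are from my setup and may well be different on yours.

# roughly the same VM as created in the GUI
qm create 205 --name ubuntu-24-04 --ostype l26 \
  --cores 4 --cpu host --memory 2048 --balloon 1024 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-single --scsi0 local-lvm:25,discard=on \
  --ide2 local:iso/ubuntu-24.04-desktop-amd64.iso,media=cdrom \
  --agent enabled=1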
My VM is now being built here on the left: 205. So you can either
right click on it to start it and to
connect to the console, or you can just select it here and do the same
using these buttons. So it doesn't really matter. That's exactly the same. I
will click start here maybe. And once we can see
green play bottom, we can console to that machine. And it will ask us if we want to install Ubuntu, and that's
what we want to do. I will click Enter, and
now you will just follow standard installation
process for Ubuntu 24. We just pick the language
next. C skip that. For me, it English okay. I'm on wired
connection right now. Click Next, and we need
full installation. So I will leave it as it
is. Install your Bumpu. By default, interactive
installation is picked, and I'm
okay with that. Apps, I don't care about any
apps really, but never mind. And this is up to you, but I will click them both
of these options. And yes, that will
erase the disk 25 gig disc that we
created in Proxmox. Happy with that. Click next in Aismatic computer,
I will call it. I don't know, meaning PC and
we choose the password for the system. That's it. Select your region here and
just review the options. I'm happy with that, so
I will install Ubuntu. This process will take a while, so I will fast forward it. And it took a while
over 10 minutes, I think, but it's now completed
so we can restart now. You will receive
the message saying, please remove the
installation medium, but you can ignore that. Press Enter. That's it. I can now log on as user Mark with the password
that I've just created. You have the welcome message, and that's basically
the process completed. There are some additional
questions from Ubuntu, but no, I don't want to
share system data, and I will just finish it. That's it for today and
thank you for watching.
4. Install Arch Linux on Proxmox: Today, I will go through the Arch Linux installation process. I will install it on
my Proxmox server, but you should find
this video useful even if you want to install
it on any other device, like maybe your laptop, maybe directly on your PC, or maybe even Br metal
server you have somewhere. First, we need the
Arch Linux ISO image. I will simply Google
something like Arch Linux Download and I will choose that first
link, Arch Linux downloads. Here is where we can
download the ASO images, and you can see that ISO
image can be burned to a DVD. I don't think anybody
does it anymore, but it can be also
mounted as ISO file. That's what we will do
in our Proxmox server and can also be directly
written to a USB flash drive. That's what you might want
to do if you install it on laptop or PC and
not on Proxmox server. But anyways, we scroll down, and there are some
locations, I mean. So you can pick the location
that is quite close to you. We'll be at the bottom, I guess. So for me, maybe I will pick
bytemark.co.uk,
need is that very top link. It's just dot ISO image. You can see it's 1.1 gig, so we just click on that link, and it starts downloading. And now the download
is completed. Again, if you want to install that arch Linux on laptop or PC, you can use a program like Balenaecher or any
other program that is able to create bootable USB
drive from that ISO image, and then just stick that USB to your laptop or PC and
boot from that USB image. But because we are installing
it on Proxmax server, the process is a
little bit different. So we go to Proxmax server. You can see I have some already
running CasaOS and Ubuntu, but we go to local on the PVE node, and this is where we upload the ISO we've just downloaded. So I just click that
upload button and I will select the file that
has just been downloaded. I'm sorry, not this
one. It's this one. I downloads Arch Linux. I will select it and
just click Upload. You can see task Okay. That's what you always want to see ready. So we can close it. And now in that local
PVE in ISO images, you can see we have
arch Linux available. So we are ready to create a VM. I will just click
on that, create VM. I will pick the ID for that VM. Maybe I will change it to
two oh five because I've got two oh four already for bunt and CSOs was installed
different way. That's why it's so low. But I usually pick
numbers above 200. I will name it Arch Linux. And we can go next to the OS. In OS, we just have to pick the image we've just
uploaded to the Proxmox. The type can be left as it is, so we just click Next. Here, I usually tick the QEMU Guest Agent, but I haven't figured out if it's useful for Arch Linux yet; basically, you can also leave it as it is and just click Next. For disks, I believe two gig is the
minimum for Arch Linux, but I will give it
some extra space. I will give it maybe 20
gig. That should be plenty. I will add discard, which is the trim option
for SSD and SSD emulation, leaving everything else as
it is. Then I click next. CPU, I always use type host, which is here at
the very bottom. So host basically
disables emulation. And if you want to
read more about it in this help bottom,
if you click it, you will see that if you don't care
about live migrations, you can set your CPU to host, and it should give you
maximum performance. That's why I always choose it. So actually, it's a lot of
useful information here. You can read it
all, not only head, about the CPU type,
but never mind, let's go back to
our installation. I will give it
more cores, maybe two. And what I usually
pick here as well is allow guest OS to use 1 GB size pages. That's what I usually enable. And I believe that's all.
That should be fine. So you can click next.
Memory, I use ballooning, which means it's like a
floating amount of Ram. By the way, if you can't see it, you probably don't have that
advanced button clicked. So, maybe minimum,
we will set it to 1024 and maximum to 2048. Something like that should
be more than enough. Then next, network, I don't really want
to change anything. For me, it's good as it is. So I just click next, and this is just
confirmation you can go through and see if yeah that's really what I want to configure. So we just click Finish. Our VM will start
already here 205, you can see the Arch Linux. What we can do now,
you just click on it and either click
the right bottom of the mouse and start it here or we can start
it there as well. So we start and then we can also console to that instance. You will see this is
the installation guide, and it will automatically
start in 7 seconds anyway. We can actually
check other options, but usually the top one
is what you go for. I click Enter and we start
the installation process. And it will stop at this stage. So why did it stop here? It actually says above: you can see it will require a connection to the Internet. So if you are connected to wireless, for example if this is your laptop, if you're installing it on your laptop and you use a wireless connection, you will have to use that iwctl utility, because you need that device to be connected to the Internet; Arch Linux will require that connection for the installation process. So you type something like iwctl, then --passphrase, and then here in quotes, I believe, your password. I mean, not the literal word password; you type what your actual password for your Wi-Fi connection is. Hope that makes sense. Then station, usually it's wlan0, and then connect, and here is where you type your Wi-Fi SSID, so whatever it's called, I don't know, maybe MyHomeNetwork or whatever. But because my Proxmox server doesn't even have a Wi-Fi card and it uses a wired connection, I can ignore that step entirely, because it's only needed when you are on Wi-Fi. And remember, even if you have a laptop or PC that is currently on Wi-Fi, you might still temporarily connect it with an Ethernet cable just for the installation process, so you don't have to play with that iwctl command at all. So it's up to you.
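For reference, if you do need Wi-Fi in the live environment, the whole thing described above is a single command; wlan0 and the SSID and password here are just example values, so swap in your own.

# connect the wireless card to your network from the Arch live ISO
iwctl --passphrase "your-wifi-password" station wlan0 connect "YourHomeSSID"
# check that you actually have connectivity
ping -c 3 archlinux.org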
In my case, I can now either just type archinstall to run the installation configuration script, or I can do it the old-fashioned way and configure every single setting manually. But the archinstall script is much more user friendly, so let's just use that. We type archinstall and just press Enter. So you get that
configuration guide, let's call it, where we can choose all the settings
that we are interested in. Art install language English. Yes, that's fine.
We can leave it. Mirrors. If we click Enter, you can see mirror region. If we click Enter again, it will give you all
the regions available, but you should simply pick
one close to where you live. You can see at the bottom here, press forward slash to search. So if I press forward slash, and then you and I, I've got United Kingdom, United States or reunion. For me, it's United Kingdom, and I can press Tab
button to select it. You can see that little
asterisk shows up. So I press Tab again to
deselect and tab to select. Press Enter, and we can go back. Now the next option, local, that's actually keyboard
and language settings. So if you are in US,
you are probably okay. You probably don't have to
change anything because you can see down
here below info, keyboard layout is
already set to US. Language is ENS, so English
US and encoding is UDF eight, which is probably okay for you. But for me, I will change it
because what I need is UK, which is here just above US. The language I want is EN, but it's ENGB and the encoding, yes, it's okay, UTF
eight. You can go back. We are in the main menu, but now that if I go up
again to localise, we now see all those settings
that are currently picked. If I go even further up, my region was United Kingdom, keyboard layout is
okay, et cetera. Let's go further
Disc configuration. Let's click Enter. And you can pick here manual
partitioning, if you wish, and go through that have
disc configure et cetera, partition it disc
anyway you want. But you know what? If you use the best effort default
partition, it's so much easier. I will show you just
click Enter here, and then you pick the
volume we've just created for that operating
system in Proxmx, which is this QMO hard disk. So again, I will press
Tab to select it. The little asterisk showed
up, and I press Enter. Now we can pick our file system, and honestly, I don't know if you want to use XFS or F two FS. Probably the choice is between extended four and better offaS because Batter offaS is
a newer file system, so maybe I want to select that setting.
I will click Enter. It will ask me if I want to create sub volumes with
a default structure. Yes, that's what I
want, and it will ask if you want to
use compression. So I will pick that as well. So again, with up arrow, you can see all the information, what we've just configured. It will create small FAT
32 volume just for boot, and then the Better fAS as
main storage, that's fine. Next option is disk
encryption and eff click Enter and then enter
again, encryption password. You can see Enter disc
encryption password or leave blank for
no encryption. For me, I don't want
to encrypt the disc, but maybe you won't,
if you leave it blank, there will be no encryption. If you use the password here, your disc will be encrypted. That's all it is. Let's go back. Now, the bootloader, the group is selected
for me, and it's fine. But if I want it,
I can change to different one. But
let's stick to GRUB. Swap: true? And yes, that's what I want to leave; I want to use swap. The hostname, you can
change it if you want. You know, you can call
it whatever you want. No. I will just leave it. Doesn't really matter. Now the root password, if
we click Enter, it says, Enter root password, leave blank to disable root. And again, it's up to
you, but personally, I would just disable root
because in the next step, we will be able to create
a user with pseudo access. And many new Linux versions have root account disabled
by default, I mean. So it's your choice. If you type the password here, you will have root account. If you just leave it blank,
you disable the root. And that's what I will do.
I will disable the root. I will not type anything here. And now we have user
account, user account. Other user, I will
create a new user. I will call it Marek, password. Okay. I have to type it
again for verification. And now it asks me: should Marek be a superuser? Well, yes; remember, I don't
have Root account, so, yes, I want to have a user
with superuser privileges. So I click Enter,
yes, and that's it. And if I want, I can
add another user. I can create as many
users as you want, but that one is fine for me. I will just confirm and exit. And now the profile. Let's click Enter to get into that and the type,
click Enter again. And this is interesting
one because you can choose minimal server or X Org, but I would go
personally for desktop because you can see
it installs like VIM, HTp et cetera, but it also prepares your
desktop environment. If you want to have that
graphical user interface, the desktop profile is really
the one you want to go for. So I will click Enter, now it asks me which ones
I want to select. Most people will
be familiar, just like me, with GNOME or KDE, and you use Tab again
to select your choice. But note that you can actually
select more than one. You can select all of
them even if you want, and then you can
switch between them. But to keep things simple, I will just use
gnome and that's it. So I click Enter. It asks
me for a graphic driver. And by default, it's
all open source. And this setting is
okay for Proxmox. But let's click Enter Anyways. So if you install this arch
Linux on your laptop or PC, maybe you have AMD, Intel or N Video card, and do you want to install
different drivers? Then this is the way to do that. But as I said, Proxmox is okay
with the open source only. We also have the greeter. The greeter
is just your login page. If you click Enter here,
the only other one I heard is SDDM, but GDM is fine as well. It's just login page, so it's
not that important really. Okay, so we can go
back, arrow up. Can see, you can review
it again, arrow down. Next option is audio, and by default, it's
no audio server. Well, you don't want
to leave it as that. Let's click Enter and
you've got two options for Pipe wire or Pulse audio. And I would go for pipewire
because it's newer option with real time
multimedia processing and some other advantages. But you might consider
pulse audio only if you find some issues
with the pipe wire. Me, it works great. So
I will pick pipe wire. Now, the kernels. By default, it's a default Linux kernel. But if you click Enter, you can see we've got other
kernels as well. There is a hardened one, and
there is a longtime support. And the thing is,
if you click tab, you will note you can
select more than one. And you know what
it's not that stupid because maybe you
want to play with that later on and booting your Linux using different
versions of kernel. But for this
scenario, I will just use basic one, the
default one, I mean. Enter additional packages.
Let's click Enter, maybe. It says that you can install additional stuff in the
installation process. The truth is you can do
it later on as well. But it says, you
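Just to show what "later on" means: once Arch is installed, extra packages can be pulled in at any time with pacman; the package names here are only examples.

# install a browser (or two) after the installation
sudo pacman -S firefox chromium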
But it says, you know, if you desire a web browser such as Firefox or Chromium, you might specify it here in the following prompt. You know what? Why not? I mean, I will probably need both, so I can type firefox and chromium; as it says, they have to be space separated, so not comma separated. And just click Enter. It will verify them at the same time. Have a look: yes,
they are listed. If you had error
on previous stage, that means you probably
misspelled something or the package is called
something different. We can go now to
Network configuration. Basic click enter, and I
guess nine out of ten times, you will just go for
NetworkManager. But if you want, you can also
configure it manually here. If I click Center on
manual configuration, it will ask me to
add interfaces. It will see I have it
connected with an Ethernet cable. The interface is ens18, then I click Enter and I
can choose if it should be DRTP dynamic host configuration
protocol or static IP. If I want static, I can
create static IP here. So for me, something like 192, 168, one, maybe
what 25 slash 24. Default gateway is
the IP of my router, which is 192.168.1.1. DNS: maybe Cloudflare, 1.1.1.1. You can confirm and exit, but I will actually
cancel because I will go back and I will just use
that network manager. So, you know, let
me go back again, but you can see how
much easier it is. Just use Network Manager rather than typing
everything manually. What is your choice
again, Time zone, loads and loads of time zones. You can scroll down,
but it's much easier to press forward slash again
to search as suggested. So I will forward
slash to London. I've got Europe
London Time Zone. That's what I need.
Automatic timesing, I would always leave it true. You want to use NTP
for various reasons. Optional repositories,
I'm not interested, so I can simply install it
now. Let's click Enter. Again, Enter, it says
press Enter to continue. We formatting the drive, it will follow with
the installation process and once it's done, we should have
running arch Linux. This took a while,
probably several minutes. But it now asks us, would you like to
chroot into the newly created
installation and perform some post
installation configuration? Well, no, that's not what
I'm interested in, so I will just pick no, click Enter, and it will reboot. Oh sorry, it will not
reboot on its own. You have to type reboot. So Enter and the Arch Linux
should be now up and running. We can pick first option, Arch Linux, and that's
it. Now we can log in. By way, this is
that GDM, remember? It's called Gretter
in Arch Linux. I can type my password. And that's our Arch
Linux installed. You can pack the
tor, I just skip it and you can see those nine dots. You can see the
Firefox, for example, and Chromium has been installed because we added it as
additional packages. We've got already VIM installed and some other stuff like HTp. You can also type here,
let's say, terminal. If you want, you can
make this window bigger. You can also type HTp here
to see the CPU utilization, memory utilization, and
all the nice stuff. If I want to open Firefox, I can use this, go there
again, and Firefox. Okay, that's it. I hope that's helpful, so
see you next time.
5. Install Linux Mint on Proxmox: Linux Mint is one of the most popular
Linux distributions, and it's perfect
for Windows users who want to switch
to Linux because Linux Mint makes that migration as seamless as it is possible. And Linux Mint is
based on Ubuntu, but uses different
desktop environment. It can use Cinnamon, XFCE or made desktop
environments. While, Ubuntu, by
default uses nom. So okay, let's just
install it then. I will install it today
on my Proxmox server, but I will add some extra
info where necessary. And this way, you should
also find this guide useful if you want to
install Linux Mint on other devices like PC laptop, server, mini PC coffee machine. I don't know, wherever
you want to install. So let's get started. And first, I need to download the
Linux Mint ISO image. So to do that, I will
just go to Google and search for something like
Linux Mint download. And I will pick that
first link from the top, which is directly
from Linux Mint. So you can see Linux
Mint 21.3 has a code, Virginia, and here is where
you can choose your version, your desktop
environment version, I mean, I will
download the cinamon, but as you can see, there is another one XFCE
or mate addition. So let's just scroll up and download the cinnamon edition. The installer is
2.9 gig in size, and you can also find here installation guide
release announcements, and this is the link if you want to download it using
Torrent downloader, but I will scroll down
and you have mirrors. You can use either word
mirrors or you can scroll down to whichever
location is close to you. For me, it's United Kingdom, so I have a little
bit of scrolling. Maybe UKFast sounds good. Now the download has just started, so I have to wait for the ISO
download process to complete. The ISO is now downloaded; I can see it in the folder.
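If you want to be sure the download isn't corrupted, you can also compare the ISO against the checksum published next to it on the download page; a quick sketch, and the filename is simply whatever your downloaded ISO is called.

# print the checksum of the downloaded ISO and compare it with the one on the mirror
sha256sum ~/Downloads/linuxmint-21.3-cinnamon-64bit.iso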
And if you want to install it directly on a PC, laptop or server, that's where you would use programs like Balena Etcher that are able to create
bootable USB drives, and you would want
to write that image to that USB using this program. Once you have it on USB stick, you just slide that
stick into the laptop or whatever device you're
installing the Linux Mint on, and you would boot
from that USB stick. However, for us, it's a different process because I'm installing it
on Proxmo server. So what I have to do
go first to my Px Mx. I will go to Local
PVE to ISO images, as you can see, I
already have some, and now I click Upload to upload the file that
I've just downloaded. So I will select that image, which is currently in
my Downloads folder, as you can see, Linux Mint, and then I will
just click Select. Linux Mint 21.3, yes, that's what I want, and
then just click Upload. Should see task Okay at
the end of the process. That means the file was uploaded correctly
to the Proxmox. When I close this window, I should see it available here in my Proximox Console in
available ISO images. That means I can
now create a VM. I can create virtual machine. I click that button in top
right corner, create VM. I will pick the ID for my VM. Doesn't really matter, but I
will pick maybe two oh six. I already have two oh
four and 205, so the next one up is 206. I will call it Linux-Mint; I can't have spaces here, so I will add a dash.
I can click next. I have to pick my ISO
image that I just upload. And it's Linux
Mint, I click next. I can leave everything here
as it is, click next again. Disk, by default is 32 gig. I will make it a
little bit smaller. Maybe 20 gig should
be more than enough. I will add discard which is trim option for the SSD drive, I mean, and I can click next. In the CPU tab, I don't like
having processor emulated. I usually pick host, which is at the very bottom, which means I've got the
best performance available. I will also increase the
number of cores, maybe two. Then we can click
Next again, memory, it's set to two gig, which you could give
it a little bit more. But in this instance, I will
just leave it as it is. Click next again, Network. All those settings
are fine for me, click next again
and just confirm everything if
everything looks okay, configuration wise,
it looks fine, so I will just click Finish. Now the VM is being built two
oh six, we already can see. We don't have name yet, but
shortly it should show up. There it is Linux Mint. Now if I select it,
I can navigate using either right mouse button or I can use these buttons here
in the top right corner. I will just start
this virtual machine, and now I will console
to that machine. You can see it's connecting
and it's Start Linux Mint. So I will click
Enter. It might take a while because it's
not normal bootloader, you will have a kind of
working Linux mint already, but I will show you what I mean. So what you can see now, it's an instance of Linux
Mint as if you were running it from the
CD or DVD drive. Remember that live CDs. So at this stage, Linux
Mint, lets you play with it. If I click this icon, I already have M and everything, but the performance can be
terrible because for me, it's fine because my ISO
is currently on SSD drive. But if you booted it
from the USB drive, your experience might
be not that great. So what we have to do next is click that Install Linux Mint. So we install it properly
on the drive rather than previewing it directly from that ISO. Hope that makes sense. I will double click that.
And this should trigger the proper installer. You
choose your language. For me, English is okay, even though it's not
my native language. If you are in US, you
can leave it as it is. For me, it's okay.
I click Continue. Install multimedia
codec. I'd say yes. I save us sometime later on. Let's click Continue.
Now a little warning, erase disc and
install Linux Mint. This will delete all your
programs, blah, blah. Well, I don't have any programs. It's a fresh installation
anyways, so yes, I'm fine with that and I
will click Install now. This will just double check if you are sure you know
what you're doing, because this basically
will erase everything on the drive that we allocated
for that Linux Mint. And in Proxmox, we did it during the virtual
machine creation process. But if you, for example, install it from USB drive, you have to be sure you
choose correct drive because you can erase
wrong drive at this stage. So yes, double or triple check that this
is what you want to do, really. So we click Continue. It will ask us for time zone. London is okay for me,
at least; continue. Now pick your name. Then there's the computer's name, what it will show up as on the network; I'll just adjust that. Below is the username; I can change it if I want, but that's fine for me. Then we create a password and repeat it here, and now
we can click Continue. This process will take a while, so I will just fast forward
to when it's completed. All right. I took
around 10 minutes on this minipC
running Proxmox, but at last it says,
has finished. You can continue testing
Linux Mint means you can stay here as you are and use this
kind of live CD environment, but I want to complete
this proper installation, so I will restart now. Let's click Restart now. Now it says, please remove
the installation medium. But that is true if you run the installation from
the bootable USB stick. Now is the time to remove
it from the device you are installing Linux Mint on
and only then press Enter. But because I installed
it on Proxmox, I don't even have
that USB stick, so I will just press Enter. Now the proper installation of Linux Mint asks me
for my password. You can see that welcome screen, you can read more about
Linux Mint itself, I will disclose it and
you can see you've got loads and loads of
programs already installed, pre installed during
installation. You also have Mozilla
Firefox and most of the stuff you would expect from operating system,
it's already there. So I hope you will enjoy Linux Mint and I will
see you next time.
6. What is LXC (Linux container)? How does it work?: In Proxmox, we can create virtual machines using
this Create VM button, and then we can create
LXC containers, or Linux containers, using that Create CT button in the top right corner. But what is that LXC container really? I mean, what does it do
exactly in the background? Or how does creating an LXC container compare to
creating a virtual machine? If you ever created an LXC
container in Proxmox, did you notice that you do
not have to install anything? We just run the container with no prior
installation needed. We will explore today
what that LXC is, how it works, and how it compares to the
virtual machines. Let's first have a look
at the major differences between VM and container
creation process. I want to just
quickly show you what the create VM options are
so we can see exactly how very different these
available options are when we compare them then to
create container options. What is the reason for that? Why these options differ a lot. But it's also worth to mention that I'm using my
Proxmox server heres, but you have to be aware that
those virtual machines and LXC containers are not
Proxmox specific things. They are Linux thing.
So you can run VMs and Lx containers on
any Linux operating system. For example, to create
an LXC container on Ubuntu, you'd have to install LXD, a Linux daemon, and run a
lot of commands in CLI. Well here in Proxmox
the Proxmox gives us that nice little user interface where we can do the same
with just a few clicks. That's why it's so much easier to see all those differences here on Proxmox than on any other operating
system really. If you ever created
Virtual Machine in Px Mx, then you should be familiar
with all those options. I like here, of course,
you have to choose the Virtual machine
identifier like maybe two, four, five, and
then use the ISO. Here, I also want you
to note something. Look at the sizes of
those ISO images. For example, Linux
Mint 3 gigabytes. Windows, 5 gigabytes.
They are huge. If I choose Linux
Mint, let's say, I can then choose the
guest operating system, if it's Linux, if it's Windows, if it's Solaris or
other operating system. This one is Linux, and if we go next, I'm
not creating one. I'm just going through
these options. That's what we should
concentrate on here. I can choose different
graphic cards. I can choose different
machine types or bios even. I can choose have
a choice of tree here or Scazzi
controllers, blah, blah. If we go to disks again, I can choose what type of device I want to
choose, et cetera. And then let's go
to CPU as well. I can choose what
type of CPU or how this CPU should be presented
to this operating system, to that Linux mint
that I'm creating now. Have, again, choice of many, many different CPU
types like AMD, as you can see, Intel,
blah, blah, blah. Okay, I hope you
know what I mean. Let's close this. It's not exactly what I
wanted to show you. I want to show you
how it differs from that create City options. Create City, I mean, here, it's not big difference. Let's say two, four, five, our identifier, I can
choose host name as well. I need to choose pass
but then if we go next, it asks for template, and I don't know if I have any. I mean, I've got one,
it's for Debian, but look at the
size now 126 mega. Remember, the ISO was five gig for Windows or
three gig for Linux. Here we've got Debian, so it's also Linux
operating system, but the template has
just 126 mega in size. We will go back to
those templates, so don't worry too
much about it now. Let's go further disks, not much choice again. The storage is already
chosen for me, and I can only change
the disk size. Right. Maybe I will
change it to 50, but that's basically
all I have here. I can't choose if it's ID
or Scuzzi or whatever. Let's go next, and now
I've got CPU look at that. I only have the choice of how many cores I want to
assign to this container, but I can't choose
if it's Intel, AMD, or any other processor. And memory, again, I can only choose the amount of
memory and the swap. So if I go next, just some
basic networking stuff. And then I just go next, next, finish, and that's it. Look at that. It took what? 2 seconds? It says Task okay. And if I click on it,
I can just start it. That's job done. My LACC
container is up and running. I can see already CPO usage. But if you did the same
with the virtual machine, that would be the
point where you would start your installation. Here, we didn't
install anything. All right, so let's
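By the way, everything that Create CT wizard just did can also be done from the Proxmox shell with the pct tool. This is just a rough sketch; the template file name and the storage names here are examples, so yours will differ:

    # Create container 245 from a downloaded Debian template,
    # with 1 core, 512 MB of RAM and a 50 GB root disk
    pct create 245 local:vztmpl/debian-12-standard_12.2-1_amd64.tar.zst \
        --hostname my-container --cores 1 --memory 512 \
        --rootfs local-lvm:50 --net0 name=eth0,bridge=vmbr0,ip=dhcp
    # Start it and open a shell inside - no OS installation step needed
    pct start 245
    pct enter 245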
just go back, okay? Let me exit that. So what's going on here? If I go here first to
these city templates, as I said, it's just 126 meg. Why is this city template so much smaller than
Virtual machine ISO? It's because this
container template is mostly consisting of
just basic user space. And I know it might not tell
you much at this stage, so to explain that better, let's break my current
Proxmox machine into three main
separate components. I mean, this is the
main Proxmox servers, and what are the three
main separate components that make it work
in the first place? The first component
is the hardware. It's pretty obvious because you have to install
Proxmox on something. You need a motherboard,
you need a CPU, memory, hard disk, some network interface
card, et cetera. A miniPC laptop or personal
computer will do just fine. If you're not sure how to
install Proxmox on them, then I have a video
that takes you step by step through the Proxmx
installation process. But anyways, what that
Proxmox installer does, it first installs so
called Linux kernel. The Linux kernel is a
component that knows how to talk directly to that
process or memory or hard disk. So if I go here to boot
under the root directory, if I run LS LI, I can see this is
actually my Linux kernel. I mean, my Proxmox runs
on this Linux kernel. You can also run command UNM R, which basically shows you
the same information. 681268 12 PV. And
interesting thing, you can run up search
Proxmox kernel. This command will show you all available kernels for this proxmx it's loads
and loads of kernels, as you can see, I can
scroll up and up, many different kernels
to choose from. Be kernel is something
I can replace. I can install different kernel. But the thing is,
you are not able to talk to that kernel directly. By default, the kernel
doesn't even do anything. Kernel is not something
for us users to play with. The only thing you
can actually do is to install different
version of kernel. That's very important component. This kernel was first
created and released by Linus Thorwalz in 1991, nearly 35 years ago, but it's basically
still the same project that was originally
created by Linus. I mean, yeah, of course, it grew in size a lot since then, and a lot of new
things were added. But basically, Linux kernel is one constant
specific project, and its major focus is just to be able to talk to
the computer components. But you might ask
if users cannot talk to this kernel directly, then how we can interact
with our computer? And the thing is users we have to use so called user space. This is the third
major component installed during
Proxmox installation. User space includes, for
example, file system. So if I go to root folder, let's say, if I run LS LA, all those folders
you can see here, they are actually part
of that user space. Then if I go to maybe Ben I
run the same command here. What you will find
here, you will find all the commands that we
can run on this system. I can scroll up. You can see it's loads and
loads of them like WG or word count or
watch or who am I? All those commands are here in this forward slash
Ben directory. Basically, what is in
this folder dictates what I can run as a user in my
command line interface. Even shell, this
command line interface is also part of user space. This is how I interact
right now with my Proxmx. If I run echo shell, see that I currently
run Bash shell, but that's not the
only shell available. There are many other
shells available. But what I mean, it's simply
part of user space as well. The fact that I can
run commands here, this is because I have
this shell available. And also, if you have
a desktop version of Linux operating system, then your user space
will also have a graphical user interface that you can use to interact
with your computer. Like I mean, currently, I am on my Ubuntu and I have graphical user interface here yes so I can also just click, buttons on my mouse, and basically I run
kind of like shell, but from this point from
graphical user interface. But what's important here is
that during installation, Px Mox created this
entire user space that I can now use with
all those folders, all programs, all commands, and all that stuff, so I can now communicate with my server. I can type some crap here, like who am I maybe was one of the commands
available here, and it says, I am rude. But the fact is
my shell does not know how to speak to the
CPU or hard drive direct. All the shell does
is simply sending so called system calls to the
kernel Kernel has an API, which is a little
entry point for this user space for this shell that is inside user space and kernel can read the
whatever crap I typed here and it can
take that information and translate it to the low
level instructions that a CPU or memory or hard drive
can actually understand. That's basically
very rough overview of how computer works process. But going back to that
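If you want to actually see those system calls happening, there is a tool called strace that prints them as a command runs. A small example (strace may need to be installed first with apt install strace):

    # Trace only the write() system calls made while echo prints a line.
    # You will see the process asking the kernel to write the text to
    # file descriptor 1, which is standard output.
    strace -e trace=write echo "hello kernel"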
user space, in fact, Proxmox runs on Debian
Linux distribution. Basically, if you compared Proxmox user space to
native Debian user space, you wouldn't find
many differences. The main difference
would be that you have some Proxm specific files that were added to
this user space. If we go to at C, I mean, that's a lot of stuff, but if we go to PVE, this PVE folder and all those
files that we can see here, these are Proxmog
specific files. That means you will not find these files on any
other Debian, or, in fact, you will
not find them on any other Linux distribution,
not only Debian. This is kind of Proxmog specific user space
that was created. These user spaces will differ between different
Linux distributions because user space belongs to completely different
independent project. That project was called
Gnu and over time, many people had their own idea what an operating system
should look like, what folders should be
included in the file system, and what it should basically generally look like
from user perspective. They started creating
their own user spaces. That's why we ended up with not just one Linux distribution, but countless of them. If you take alpine Linux, it will have different
file system, different tools, different
commons available. And let's say Centos or
Ubuntu or Linux mint. But there is one very
important element. The kernel used in all of
them will be the same. And sometimes you might
hear the time that the kernel is interchangeable. That means that you can swap
one kernel with another and your Linux distribution
will still work fine because the kernel
is one ongoing project, and all Linux operating systems will use the same kernel family. I think some of you
might say, Mark, it's not entirely true. I know I'm oversimplifying some stuff here
while going along, but I just want you to know that I'm aware of that because, for example, processor
architecture needs too much, and there is different
kernel family for RM processors and
different for X 86. But I don't want this
video to be 35 hours long, and this is just rough overview
because what we have to concentrate on today
are Lx containers. Let's go back to that
main topic then. What is that LLC container? What is the template? The LLC container is simply a new user space that you
downloaded as a template. That template is mainly
just a user space. So file system and some binaries and basically
some folders and files. And you can apply that template to your running Proxmox server, and all the hardware components stay exactly the same as
Proxmox can see them. We don't change any CPUs or memories or hard
drives, as you could see. And in fact, LLC will
also use the same kernel. This Proxmox kernel will be shared with this
new LLC container. So that LLC template
that you download in Proxmox is only a
simple file system with some applications that are
run by Kernel as kind of a separate entity
because Linux kernel has some interesting features
like C groups or name spaces, and it can use them to isolate the container from
your Proxmox server. And Linux kernel
can also control the resources that are assigned
to that Lexy container. That's why we could choose
how many CPU cores we want to allocate to container or how much disk space we
want to allocate to it. But we couldn't change the
type of the processor, for example, because there is
no virtualization involved. We basically use the same
components as Ppmox does. When you configure and
start your Alexy container, you don't have to install anything because there
is nothing to install. As already mentioned, the
hardware stays the same. The hardware drivers are already running in the kernel
and what kernel does, it simply just starts some
services in that LXC, there is not even a proper
boot process involved. Kernel simply starts or stops
some services. That's it. The advantage of that that the ELACy containers
are very lightweight for the system because it's just another user space that
Linux kernel has to control. But this advantage is that all those templates
you can apply, they have to be Linux
kernel based templates. If we go back, if I go to CD templates and I search
for new templates, what you will see here
is we run Debian, but we also have Ubuntu, Fedora, line Linux, arch Linux, et cetera There is
quite a few of them, but they are all Linux
based templates. You'll only find those
Linux distributions because the template has too much current
available kernel that is already
running in Proxmox. This is very different than when you create
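By the way, those templates can also be listed and downloaded from the node shell with the pveam tool instead of the GUI. A rough sketch; the exact template file name depends on what is currently published:

    # Refresh the list of templates Proxmox knows about
    pveam update
    # Show what is available for download (all Linux based, as discussed)
    pveam available
    # Download one of them into local storage, for example a Debian template
    pveam download local debian-12-standard_12.2-1_amd64.tar.zst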
a virtual machine. Because when Proxmax
creates a virtual machine, you have to go through
the installation process because Proxmox will
virtualize the hardware first. The system will think
it has separate CPU, separate disks, separate
memory modules, et cetera, and then the
system will also create its own kernel and
its own user space. So the disadvantage is obvious. There's a lot more resources needed to run the
virtual machine, but advantage of that
is also obvious because you are not limited to Linux
operating systems then. You still can run Linux
as a virtual machine, but you can also run Windows, you can run free BSD, Solaris or any operating
system you want, really, because you create separate hardware which is virtualized, and the installer will
create its own kernel, so that limitation is gone. That's all I wanted
to say today. So I just hope it was helpful
and thank you for watching.
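One thing worth trying yourself after this video: you can see that VM-versus-container difference from inside a running LXC container. The number of cores is capped by the kernel's cgroups, but the CPU model you see is still the host's physical processor. A quick check, run in the container's console:

    # Number of CPU cores the container is allowed to use (the cgroup limit)
    nproc
    # The CPU model is still the host's physical processor, not a virtual one
    grep "model name" /proc/cpuinfo | head -n 1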
7. Proxmox helper scripts - single command installer: Did you know that you can install anything you want on your Proxmox server using just one command? Let me show you how it's done. What you need to Google is Proxmox helper scripts. Then we can click that very top link, from the GitHub pages. And then we can choose a tool from a given category, and you will see there are quite a few of them. Let's say media and photo: you have all that stuff like Plex Media Server or Jellyfin or Sonarr. And that's just one category; for operating systems, you've got the newest Ubuntu available, 24.04. All of that can be installed using just one command. I think the easiest way is to simply go up and, here in this search window, just search for whatever you're interested in. Maybe CasaOS, it's a very interesting project that, like everything else, can be installed on Proxmox using just one command, and it's this command here. Let me copy it. We then go to our Proxmox, to our node. In my case it's called PVE, and I just paste it here. That's it. Let's press Enter. It will ask us if we want to proceed. Yes, of course. And you've got the option to use default settings or advanced settings, so I will use default. As we can see, the container was created on the left, container number 100. Now it is being updated, and we just have to wait. It takes a while. It's installing some dependencies, et cetera; it even says 'patience'. But anyway, we can see the CasaOS operating system is being installed. That's now done, and you can see the CasaOS setup should be reachable by going to the following URL. We just copy this URL, we paste it in our browser, and believe it or not, this is our operating system already there for us. You can just go, create a username, create a password. I will save it as well, and that's it. That's our CasaOS up and running.
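Just so you know what you are pasting, those one-liners from the helper scripts site generally follow a wget-pipe-to-bash pattern. The example below is only illustrative and the URL is a placeholder, so always copy the exact command from the website itself:

    # General shape of a helper-script one-liner (placeholder URL!)
    bash -c "$(wget -qO - https://example.com/path/to/casaos.sh)"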
easier than that. But bear in mind, not all links will
complete the installation. Let me show you what I mean. Let's check that it was
Ubuntu, the newest Ubuntu. It's not the container
I'm interested in. I'm interested in the VM, and this is the newest bundle
that actually is available. It was just recently released. You can see we can also
copy this command. It will also install everything in one go like using
just that one command, but have a look
more info at blah, blah, blah. What it is for? Let's have a look. First,
I copy this command. I go back to my
Proxmx to the node, PVE in my case, to the shell, and I
will paste it here. Let me maybe clear it first. So, exactly the same process
as we did with Casa OS. Now we'll just press
Enter, and I just wait. Ubuntu is being
installed. Proceed yes. Again, just default settings, so everything is done
for me automatically. After a while, a new
VM should be shown here probably with
the ID of one oh one. Oh, there it is. Virtual
machine ID is one oh one. As you can see, it's being
created here on the left. It. But notice that this time it didn't give us the link
to the operating system. We can't access it immediately. So it's been installed,
but it says, set up Cloud in it
before starting. And it gives us actually link to believe it's the same 2072. Let's have a look. Yes, it's exactly the same link as here. So either copy from there
or we just click this one. And what it is, it tells us what to do.
Setting up Cloud in it. You don't have to know
what cloud in it is. It tells you exactly
what has to be done. For example, you have to
set up the root user. You have to create
password for that, change the upgrade
packages, et cetera. You simply have to follow
these instructions. To complete this process, this ubumtu is already here, it's installed, but it's not fully configured. So
bear that in mind. But it's still extremely
streamlined operation, everything is done
still in one command. You just have to configure
some basic things. And the last thing I wanted
to show you is if you don't really know sometimes
especially more advanced users, they don't want to
just run command and they don't even know what's going on in
the background. Well, you can see these
are links to the Github. So if you go back
up the very top, you've got that icon
here, view on Github. What you can view there is
actually the source code. So if you go to that
install folder, you will see all of those shell scripts that
run in the background. Like, for example, we
installed that Casa OSS, so we can find
that shell script. I will be this one.
And you can see exactly step by step
what is being done here. And this script is
very short, actually. But if we go to probably
Divan will be home. That's not that long as well. I'm sure there will
be much longer ones. Let's have a look at graphema
a little bit longer. But my point is, this is
simply open source project. You can check every single
command and you can check line by line
what is being done, what is being
installed, et cetera. So it's like full transparency. And you can see also the author, TTechs TT ECK I'm very grateful because
that helps me a lot. So yes, that's all
I wanted to say. I hope that helps, and
thank you for watching.
8. Run Windows in docker container :): I recently came across a very interesting project on GitHub that allows you to install and run Windows in a Docker container. In fact, you can run any Windows you like this way, from Windows XP upwards, and it's a fully automated process that also handles the entire Windows installation for you. If you, like me, use Linux or macOS for your day-to-day tasks, you know that there are always those one or two apps that run only on Windows. You have to have that copy of Windows somewhere, whether you like it or not. Running Windows in a Docker container is so convenient, and the fact that it's so easy and fully automated makes it a perfect use case for me. Let's see how it's done.

I will use the Ubuntu 22.04 that I have installed on my Proxmox server, but you can obviously use any system you can install Docker on. Let's just console into my VM. This is the Ubuntu, and we need a browser and we need to search for 'docker windows', but spelled D-O-C-K-U-R. We are interested in that first link at the very top. That's the project. We can scroll down and there is a readme file, which explains what to do. The most common would be either a Docker Compose file or the Docker CLI, but you can also use Kubernetes, and if we scroll further, you can see there are multiple Windows versions available. Obviously, you can scroll further, but we will do that later on. Let's go back to the Docker Compose example, I mean, and maybe we can use that, because it's the cleanest, I would say. So to run Docker Compose, I need two components: Docker itself and Docker Compose. Let's install them then. Let's open a terminal, and then you run sudo apt update. Let's clear that. And now we need sudo apt install docker.io docker-compose. Well, the thing is, I have them already installed, so it didn't do anything. But if you haven't got those components installed yet, that's the command you have to run anyway.

So I'm in my... sorry, pwd, I meant. I'm in my home directory, /home/marek. There are some files and folders, but let's maybe create a new one. I will create, I don't know, a docker-comp directory. We will keep our Docker Compose files there. I will cd into the folder. And let's go back to the instructions. This is what we need for our compose file. This will install Windows 11, but let's see what Windows versions we have available. So the win11 argument will install Windows 11 Pro. win10 is for Windows 10. We've got Windows 7, we also have Windows XP, and we also have some Windows Server versions. Let's start maybe with Windows XP, because the installer is just 600 megabytes. So let's maybe start with this one. How can we do that? We can copy their Docker Compose example. Let's just copy everything. Let's go back to the terminal, and I will vim, let's call it, windows-xp.yaml. Now we just paste everything. And what we have to change is the environment, the version, which should be Windows XP. And that's, in theory, all I need.

But if we go to those instructions, to that readme file, and scroll further down, we can see that we can select different languages. For example, English is the default language that will be downloaded, and that's fine, but you can choose a different one. What I want to change, though, is the keyboard layout, because the default is EN-US, which means English with a US keyboard. I've got a UK keyboard, so let me copy those two lines, and this is what we have to add to the environment section. Let's go back then, environment, let's just paste it here, and I need UK. Something like that. Let's see what else is there, what other options we have. We've got the storage location. By default, it's /var/win. Let's be more specific, maybe. Let's copy all of that. If I go back, we will paste it here. And I want to be more specific here: I don't want just 'win', I will call it win-xp, so I know this folder will contain only stuff related to this Windows XP instance. This is optional, and basically, let's just leave the rest as it is. Let's see how it works. I will save this file, so Escape, colon, wq. We can cat it again just to have a look. That's our file. We will be able to watch the whole operation, like the ISO download and the installation progress, using this port. This is the VNC port, 8006, and we'll be able to watch that whole process by connecting to this port.
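For reference, the finished compose file ends up looking roughly like the sketch below. Treat it as a sketch only: the image name, the accepted version values and the keyboard/region variable names should always be double-checked against the project's current readme, and the folder under /var reflects my naming:

    # Write the compose file in one go (same content we just edited in vim)
    cat > windows-xp.yaml <<'EOF'
    services:
      windows:
        image: dockurr/windows          # image name as published on Docker Hub
        container_name: windows-xp
        environment:
          VERSION: "winxp"              # which Windows edition to install
          KEYBOARD: "en-GB"             # UK keyboard layout (assumed variable name)
          REGION: "en-GB"
        devices:
          - /dev/kvm                    # hardware virtualization passed through
        cap_add:
          - NET_ADMIN
        ports:
          - 8006:8006                   # web viewer used to watch the install
          - 3389:3389/tcp               # RDP
          - 3389:3389/udp
        volumes:
          - /var/win-xp:/storage        # keep this instance's data in its own folder
        stop_grace_period: 2m
    EOF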
Sudo Docker compose, then f, then the name of my file, which is Windowsxpt
and the word up. Now I click Enter and we can go here to Local Host port 8006, and we can see entire process. The Window six speed
is being downloaded. And you can watch the Windows
installation process, which has been automated. That means we don't
have to do anything. We can just watch. All the formatting, all the other tasks are
being done automatically. After a short while, the
Windows XP is fully installed, and I didn't have
to type a thing. We went through fully automated
installation process. You will see that default user was chosen for us and
it's called Docker, and we will have a look at that. It's another environment
variable. We can change. But basically, we have Windows
XP fully up and running. You can now personalize it,
you can do whatever you want. After 30 seconds or a minute, you will see this Windows XP, this is a confirmation. It's not some dodgy
Windows XP image. This is genuine Microsoft ISO, which you can verify with MD five hash or anyway you want, but you will have
to activate it. Means, yes, you still
need a Windows key, et cetera to activate the
windows. But that's fine. Never mind. I wanted to
show you something else. I can, of course, now shut the instance down,
turn off computer. But what I can do, if
we go to terminal, you can see that Windows
is still running. Is basically this container
with Windows XP inside. What I can do, I can control C, or press now Control C. You
can see gracefully stopping. This is very important
because that means it's not like abrupt operation
which will break your windows. This is done really nice way. It will simply turn off
your computer for you. So you can Control C here, and your Windows XP or any other windows will
be gracefully stopped. Let's now get rid
of this instance, maybe, and let's install
something newer. First, let's go to
that var folder, and this is the folder
we named Windows XP. We renamed the win to Win xP
in our Docker compose file. Let's get rid of that as well. Let's go back to the
previous folder. We still have this
Docker compose file. Let's rename it. We should
now have Windows 11. Let's amend it then. First thing I want to change
is from Windows six P, the version argument
should be win 11. Because if we go back to those instructions
in read me file, we will see that this is the
value I have to have there to install Windows 11 Pro
environment version win 11. What about the
default user Docker? I don't want to
be called Docker. I want to be called Mark. Let's see how we can change it. We scrolled through quite
a few interesting options. So user name and password can be specified using these
arguments again. So let's copy them.
I will choose Marek. And for password,
we'll be pass one, two, three, four super secure password and
exclamation mark. But what else have we got here? Have a look, Ram
size and CPU course. By default, this
container will have two CPUs and four gig of
RAM. I can amend that. I can amend using Ram size and CPU course arguments.
So let's do that. I will add that to my
Docker compose file. Ram size eight gig, that's fine. CPU course four. That's still twice as much as we had with the
default values. And for Windows 11, yes, I would say that should
be minimum recommended. Let's have a look if
there is anything else that is interesting here. Oh, disk size. Default
size is 64 gig. We can change it using
the disc size value. Let's add that as well. But
maybe not 25, six, maybe 100. 100 gig or volume,
we will again, call it not exped this
time, I will be win 11. This will be the volume on our Ubuntu server in VR folder. Another folder called
win 11 will be created, and it will be bound
to the storage on the container itself. If you wonder, what
is this the KVM? The KVM Virtual machine
is a technology that works in the background and it lets all of that happen. I mean, the KVM Virtual machine is passed through to
this Docker container, and that is really how these
windows is able to run on the Linux instance
at all because you can't just install
windows on top of Linux. Need some type of
virtual machine, and KVM is a native built in Linux solution
to do just that. So it's basically a
Linux container that runs Windows Virtual
machine inside it. This is the entire secret to how it works in
the first place. The last thing I wanted to
talk about are these ports. We know port 8006 already. This is port for VNC, and this is how we can
kind of have a peek at the ISO download and installation process because we can run it in the browser. But there are two more ports that are passed through
to our container. And in fact, 3389 is a
port for RDP connection. That means we can actually
RDP to our instance, which is much better
because this VNC port, you can see the graphic
is very poor because this is just like a browser
like connection, and the graphic
isn't that great. And even if we checked
that Window six P, it was like blurry and
not really clear phoned. But we can RDP to our instances, which will improve the quality, and we will feel more
like as if we were natively sitting in
front of that desktop. Okay, so let's try all of that. First, let's save the file, escape, call on WQ. Now what we need is sudo docker
compose F. But this time, it's Windows 11 dot
Yao. And the word up. Enter, and let's have a
look what's going on. We'll go through the
ASO download process. And installation process. We can just go for a coffee. It took a while, but we
have welcome screen now, but what changed now, we had a user Mark. So this one took definitely
longer for Windows XP. But first thing,
let's try to RDP to this instance because you can
see this is poor quality. The phones are blurry and VNC
is not really what we need. So this is the RDP
client from my MacOS. I can add PC here, and the IP address
is 192-168-1204. I remember that because
the IP address, the last digit is the same always as my host as my UbuTHst, which is two oh four as well. User count ask when
required, that's fine. Let's just add it,
and let's connect. Now I can use user
name and password that I passed through in my
Docker compose file, which was Marek and pass
1234 exclamation mark. Super secure, continue.
Wow, that's big. But now you can clearly see the difference
in the quality. Now I'm RDPD to my instance, which means if I
go here, the VNC is now logged off because I
can have only one session, and my current
session is this one. It's RDP from my MAC. So let me disconnect. And let's just close it. We can see RDP works as expected. I can get back to my
VNC session if I need. All right, so we know
that I can simply control C here to shut
down that windows. I can obviously also click here, sorry, and just
shut it down here. I shut down here, we
will see in the terminal that this windows instance
has been shut down. And there it is shut
down, completed. But how do I start it up
again? That's very easy. We can use commander Compose F, then our Docker
compose file name. And this time, not up but start. So just start. Oops, sorry. Not that sudo. You have to be a root
user to run that command. Sorry, Sudo, Docker Compose
F. Let's run it again. Let's sort. Now it should work. Starting Windows done. That means I can
connect to it again. Great. Let's go back here. What I can do now, I can also
do Looker compose, stop. This will also stop windows. You can see there are many
ways you can start and stop your Windows instance or
container, I should say. Let's go to the
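To sum up the lifecycle commands we have used so far, a quick reference, assuming the same file name as above:

    # First run: create the container and kick off the automated install
    sudo docker-compose -f windows-11.yaml up
    # Stop the container (the Windows inside is shut down gracefully)
    sudo docker-compose -f windows-11.yaml stop
    # Start an already-created container again
    sudo docker-compose -f windows-11.yaml start
    # Remove the container entirely (the data folder under /var stays on disk)
    sudo docker-compose -f windows-11.yaml down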
var folder again. So this is the win 11 folder that has been created for us. Let's go there and
see what's inside. You can see we have
the image and we have all the files that are needed
to run this container. And if you check the size of it, we can see that image
location is 100 gig. By default, it was 64 gig for Window six P. We
haven't checked that, I know, but by default, 64 gig is allocated
for any instance, but we changed it in our Docker compose file to make
it slightly bigger. Everything works
as expected, then. So you can see how
easy it is to change just one or two things
now in our Docker file. And run completely different
version of Windows. Or you can create
multiple Docker compose files and run multiple versions at the
same time if you want. It's neat, quick
and easy solution, so I can definitely
recommend it. No, this is not
sponsored in any way. This is just my
personal opinion. I also like the fact
that it runs within KVM because I know that the
underlying technologies like security enhanced Linux and secure virtualization
will keep that instance secure and completely
isolated from anything else that I run
on my Ubuntu server. L's, check this out.
Using this method, I don't have to think
about KVM at all. It's barely visible here. If you ever configured
something in KVM, you know it's not that straightforward to prepare
KVM for Windows installation, and there is a few bits and bobs that you have
to configure first. Here, all the process is
automated from start to the end. So I hope you like it too, and thank you for watching.
9. Bind mount NAS (CIFS/SMB/NFS) shares to Unprivileged LXC Proxmox container: This is a pretty common problem you might encounter if you run a Proxmox server. You installed OpenMediaVault, TrueNAS, Unraid or maybe another network attached storage solution as a virtual machine on your Proxmox. Then you created a shared folder, you enabled Samba or NFS, and you can keep and access all your files over your home network. In my case, as you can see, I run OpenMediaVault and I can log onto it. This is its IP address; I usually match the end (202) to the container ID, as you can see, it's easier to remember. And the shared folder I created is MiniPC, note the capital M, capital P and C. I also enabled the Samba protocol, which you can see here. By the way, that 'guests allowed' setting lets you access that shared folder as either a user or a guest. I've got a user created, it's called smbuser, but you will be able to access these files, sorry, this shared folder I mean, MiniPC, as a user or as a guest. With that, I can access it from any location in my home network. So then I thought, hey, I can also keep all my pictures, movies, TV shows, music, et cetera, everything in that shared folder, then install Plex or Jellyfin on the Proxmox as my media server, so I can watch all of that on my TV or on my phone or any other device in my home network.

So that's exactly what I did. In my case I installed Jellyfin, and it's running as an unprivileged container. By the way, that is the way you should have it installed. And only then, once you've installed it all and you try to add a media library, you realize this media server has no idea where to find your network folder. It doesn't matter what you put here; believe me, I tried. It will not work for unprivileged containers. This is simply the limitation. You can't choose anything here either, because these are local folders, local on the container, on the Jellyfin container in my case (or it might be a Plex container in your case), but they are not here. My movies and shows are on the OpenMediaVault shared folder. So the solution to that isn't that complicated. With just a few commands, we can make it work. To fix it, we have to go back to Proxmox, to the node itself, in my case it's called PVE, and then we have to use that shell utility. Proxmox will act as a man in the middle for us. We will have to mount our network location here first, in our case the MiniPC folder on OpenMediaVault, and then we will pass it to our Jellyfin container.

So first, we have to create some local folder so we can use it as a mount point. I will maybe cd to /mnt, that's usually a good start. I can see it's empty. I will create a new folder called, I don't know, minipc, but all lowercase maybe, just to distinguish it: it was capital M and capital PC on the OpenMediaVault, and it will be all lowercase here locally on Proxmox. Then I have to install the Common Internet File System utilities, and I can do that by running apt-get install cifs-utils. Next, it's time to mount our network location to our local location on Proxmox. So I can use the command mount -t, then cifs, then a lowercase -o. Then we have to specify the user on the NAS, on the network attached storage. Remember, for me it's smbuser, but also remember that I can access it as a guest as well. So if I go back, I can use user=smbuser, but what I could also do is just log on as a guest; that would work as well. But I will change it back to smbuser. Now, the remote location, which is forward slash, forward slash, the IP address of my OpenMediaVault, and then the shared folder on OpenMediaVault; it was with capital letters. This is my shared folder, this is what I'm referring to. Let's go back to our command. And now, where do I want to mount it locally here on Proxmox? I want to use that minipc folder I've just created in the /mnt folder. So the path is /mnt/minipc. That's here, locally on Proxmox. Now I just press Enter and it asks us for the password for that Samba user on OpenMediaVault. So I will type it in, and that's the job done. Let's have a look first. This is my minipc folder. If I cd into it and run ls -la, I should be able to see its content, and I can. That's basically the same as what we had here on my Ubuntu server. That's it. You can see I have a jellyfin folder already created, with movies and shows, so we can go even further. These ones with a dot are hidden folders or files, so I can't see them, for example, here in Ubuntu in the default view; I can only see the ones that have no dot in front. But it doesn't really matter; it works as expected.

Now I just need one more command, and the command is pct. Maybe let me clear the screen first. The command is the Proxmox container toolkit, which in short is pct, then it's 'set', then the destination container ID. In my case, it's Jellyfin, so that ID is 203 for me; in your case it obviously might be different. And then the mount point, it's mp0; zero if it's your first mount point, and for me it is, so I just use mp0. Now what we do is provide the location of the folder we want to share with that container, and for us that is /mnt/minipc. Remember, that's the one I've just created here on Proxmox locally, and then a comma, mp= and a forward slash: where do you want to place it on your container, on the Jellyfin container? I want to place it under /shared on the container. And now I press Enter. It might take a while, but it was quick, actually. Now, if I go to my Jellyfin container, under the /shared folder on that container, I will find all the content that is locally here in /mnt/minipc. Whatever I find here on my Proxmox in /mnt/minipc, I will also find there. But remember that this folder's content actually comes from OpenMediaVault, so it's a bit complicated, I know. Now, one more thing I would do is go to Jellyfin and give it a reboot, so it picks up all the changes correctly. I can see the CPU usage goes up, so it should be up. Let's go here to the dashboard: Libraries, and now I can add library content, maybe shows first. Folders. Now, we're interested in the local shared folder. If I scroll down, you've got the shared folder, and it's local here on the Jellyfin container, because Proxmox passed that information to this container. As you can see, I can see the jellyfin folder and I can see the shows folder as well. I can add it correctly now. The fact is, this folder is empty right now, but if I had anything there, it would show me all the shows that I have available there. But that wasn't the point. I hope that helps, thank you for watching.
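For reference, the whole fix from this video boils down to just a few commands on the Proxmox node. The IP address, share name, username and container ID below are the ones from my setup, so adjust them to yours:

    # 1. Create a local mount point on the Proxmox host
    mkdir -p /mnt/minipc
    # 2. Install the CIFS/SMB client utilities
    apt-get install cifs-utils
    # 3. Mount the NAS share (OpenMediaVault at 192.168.1.202, share "MiniPC")
    mount -t cifs -o user=smbuser //192.168.1.202/MiniPC /mnt/minipc
    # 4. Pass that folder into the unprivileged container (ID 203) as /shared
    pct set 203 -mp0 /mnt/minipc,mp=/shared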
10. Auto bind mount NFS/SMB/CIFS share to Proxmox LXC container after reboot: There was a video about Proxmox I released recently, about accessing folders on virtual machines from unprivileged LXC containers. Specifically, in that material we had a Jellyfin container accessing a media folder on OpenMediaVault, but that solution would work for any other LXC container accessing any other virtual machine on that box. You guys liked that video. However, many of you said that the solution presented there does not survive a Proxmox server reboot, and you have to type those commands manually again after the reboot. Many of you asked if it is possible to automate that task. And the answer is yes, of course: Proxmox is a Linux-based solution, so we can do anything we want, and there are at least a million ways to do it. Let's start automating it then. In fact, this may also show you the way to automate any other tasks, not only this specific one, because we are going to use a Bash script combined with a cron job, which you can later amend any way you wish to perform other tasks, simply by expanding or amending that Bash script.

Let's start from what we currently have here. I don't want to repeat all the stuff we did in that previous video, but I just want to quickly recap what we did there, so we are on the same page. Basically, here we've got the Jellyfin container, and here we've got the OpenMediaVault virtual machine. You can ignore that Ubuntu, because it's not used for these purposes. So if I log on to OpenMediaVault, what we have here: I've got the user, the user is called smbuser, and we've got the Samba services configured, with a shared folder called MiniPC, but please note the capital M, capital P and C. This folder also has guests allowed, which means I can access it either as a guest or as smbuser. What we did next: we went to Proxmox, to the node, PVE. We created a folder inside the /mnt folder called minipc, so this is the folder, and then we mounted the remote location onto it. The command we used was mount -t cifs. The user can be either guest or smbuser, as I said, then forward slash, forward slash, 192.168.1.202, which is the IP address of the OpenMediaVault, then forward slash MiniPC, with capital M, P and C, exactly as on OpenMediaVault, and we mounted that folder to this local folder on Proxmox, which is /mnt/minipc. If I run it now, I will get an error. Well, the password prompt is blank; it actually didn't display anything, but this folder is already mounted. So if I go there, cd to minipc, and I do ls -la, I will see all the folders that are inside there, my movies and my shows. So my Proxmox, the Proxmox itself, can now see the folders that are inside OpenMediaVault. What we did next, we had to pass that information further, to the Jellyfin container. What we used was a command called pct, then set, the container ID of Jellyfin, which is 203 in my case, mp0, forward slash mnt, forward slash minipc, and then we specified the mount point as it will be seen on the Jellyfin side, which was forward slash shared. So that's all we did in the previous video. I don't know what happens if I run this command again. Probably an error. Okay, it doesn't error, and we can go to Jellyfin and we can see it's already there, in Resources, I believe. Yes, that's the mount point. You can see this mount point: /mnt/minipc, which is the location on Proxmox, is already mounted as the /shared folder on Jellyfin. But please remember that /mnt/minipc is itself mounted to our OpenMediaVault. The data is going from OpenMediaVault to Proxmox and then passed further to Jellyfin. That's fine, that's all cool, we know it works. If I go to my Jellyfin, you can see it works; I can see those movies. It's actually just one video there. But if I go to the Dashboard, if I wanted to add library content, type, let's say, movies, I can add a folder, which is called, I don't know why it's there twice, but it's called shared folder. Within that folder, I can see those movies and shows. We know it works.

What's the problem then? When we reboot the Proxmox, the problem is mainly that our Proxmox is not able to mount the folder on OpenMediaVault, because at that stage OpenMediaVault will not be up and running yet. So there is nothing for it to mount, and therefore there is nothing to pass further to Jellyfin. And you can't use the fstab file either, for the same reason: if OpenMediaVault is down, there is nothing you can do. So we have to start OpenMediaVault after the reboot, then mount the folder on Proxmox, and only then pass it to Jellyfin, recreating that whole path that we built in the previous video.

Before we do anything, let's just do the reboot and see what we are missing. So, just a hard reboot; you know, everything is running, and I don't know what will be broken. Never mind, let's see. Oh, actually, it's not that easy; I don't think it will stop those VMs. Okay, it stopped the Jellyfin container, but it might struggle to stop OpenMediaVault and Ubuntu automatically. So okay, let me close this. Let me go there. Yes, this one is still up and running, so let's shut it down from here, maybe. All right. The remaining bit is Ubuntu. Let me console into that, because it's an Ubuntu server, I don't have any GUI there, and we do a shutdown now. Okay, nice. So this one should be shut down shortly as well, and then the Proxmox will reboot. Sorry, I didn't think of that. Oh, there was a little beep, the server is rebooting; not sure if you heard that. So if I refresh it, it should be back up and running shortly, and we will see what is missing. The server is up, not entirely yet, but it should go green shortly. Okay, the server is green, but I guess it will not start anything on its own.

So let's see what our script has to do first. I would say first we would have to start OpenMediaVault. That should be our starting point. In the GUI I would just click that Start button, but we can't do that now. We have to build our script, and in the script we need the command line instruction for Proxmox to start that particular virtual machine. By the way, this GUI doesn't do anything special really: when you click Start, it basically generates that CLI command in the background to start the VM. We have to figure out what command that button generates. So let's make some notes, maybe. Let's open a text file: what we have to do. Start the VM. So let's Google 'Proxmox how to start virtual machine command line'. Okay, that's good, those top links look okay, so let's click on any of them. Let's see: qm list will show us all virtual machines, and to start one it's qm start and then the virtual machine ID. That's perfect, that's what we need. Let's go back here, to the text editor. Maybe let's test it first. Let's go to the node, to the shell, and it was qm list. It lists all our virtual machines, and 202 is the one we are interested in. So it was qm start 202 and press Enter. Perfect, got it, it's already up and green. So the command we need is qm start 202.

What do we need next then? Before we go any further, I would say we need some kind of confirmation that this virtual machine is up and running, because you saw it took a while, and we don't really know how long it takes Proxmox to start this virtual machine. So there must be a command that is able to check its status. Let's Google again. What do we Google? Maybe 'Proxmox how to check whether the VM is running', something like that. We can see instructions directly from Proxmox, and we can see we can do qm status and then the virtual machine ID. Let's do that: qm status 202. Status is running. So that's cool. But for our script, I'm only interested in this bit, 'running'. I don't want the 'status:' part, blah, blah. So let's use the awk command to grab just that information. So it's qm status 202, the previous command, and then we pipe it to awk and we only print the second field. Oh, that's better. That's what I need: running. That's what I'm interested in. Let's copy that command; this is exactly what we need as a confirmation.

That's perfect. What do we need next then? Once the VM is up and running, now it's the stage where we can mount it to the folder on the Proxmox. The folder is still there, you can see minipc, but if you go there, you can see it's empty, because we haven't mounted what's on OpenMediaVault. For our script, the next step will be to mount that folder, and it was mount -t cifs -o user. Maybe for this purpose we will use that Samba user, the other user, but I will have to specify the password, because this user has a password as well. So it's just a colon with no spaces, and then the password. A super secure password; it's just for this purpose, I changed it. And then the location of the remote folder and the location of the local folder. If I do an ls -la now... that's the thing, one second, let's go back. Okay, you know what? You know why it didn't work? Because I was actually in that folder, in the minipc folder, so it was in use by me. That's why it looked as if it didn't do anything. I just had to go back, go to any other folder, and I didn't really have to re-run that command; it was mounted, I just had to refresh that view. But as you can see, it works as expected. All right, it doesn't matter; we can see it worked.
this information now from mini PC further
to the jellyfin. But if we go to jifin, we can see this mount
point is actually there. This information is not
missing after the reboot, so we don't really need
to run that PCT command. And the boot doesn't remove that information, so
we are okay with that. So basically, the
last thing we have to do is to start jifin. But how do we start jifin? It's not a VM. This
is a container. So what we do, we Google. We Google something like Px Mx, how to start LAX container. And maybe CLI,
something like that. Again, first from the top, so Y PCT fails, but Alex C works. So it looks like this works. It looks like Alex start
and then the container ID, but with N. Perfect.
Let's try it. Let's go to node again, LexC start, W it N, and the ID of the container. In my case, it's two oh three. Effin is container ID two
oh three. Let's enter. Okay. The container is now up
and I forgot to make notes. Let's copy this command. I just stopped doing our notes. So that was to start container. But we've got the
mount command missing, so we'll copy from history. I believe that's it. So how do we make script
out of this thing? Let's write it somewhere
on Proxmox as bar script. So let's go to root folder.
What have we got here? We can choose maybe optional
stuff. That might be. Doesn't really matter, but
let's go there to see the Opt. Is something there? No, nothing. So let's create folder
maybe called scripts. Maybe we will have
more scripts later on. Let's go to that folder.
How do we call it? Maybe mount dotsH. DotsH means it's a bash script. But for Linux, it
doesn't matter. It's more information for us. If you do Bar Script,
if you do LSLA, you will see this file
is not executable, which means we cannot
run it as a program. First thing we have
to do is to change, add executable bit to that file, which means we have to
run CH mode plus X, and then the name of the file. If we do the same
command again, LSI, now, we can see that little X, which means the file is
executable. That's what we need. Let's clear that maybe,
we've got the file yes, we created the file, MuntH
but it's currently empty, but it's executable file. Let's do nano mount
age to edit the file. And now we can do
what's called Shibang. This is basically an instruction for system how to deal
with this type of file. So we do user Bin
ENV space Bash. It's a bit dark font,
but you know what? Never mind. So this
is our first line. That's how we start every
bar script. Click Enter. And what we had next, we
had QM start two oh two. So that's what we do.
QM start two oh two. That's our first line.
And then it was what? QM status, QM status, two oh two to check the status of our virtual
machine if it's up and running. And then we were interested
only in that second part, which was the word running. The little problem here, the program does start to oh two and then
status two oh two. The output will be different. The virtual machine will not
be running at this stage. If we run the status immediately
after we start the VM, the status will be different.
It won't be running. So what we have to do instead, we have to wait until this output of the status
command is actually running. So what I mean, we embrace
that entire stuff. So it's like dollar bracket. We embrace that entire thing
because this is a command, and we then do equals
equals running. And we have to use
brackets here as well, just to make sure it's a string. So basically, we run that command and then we
treat it as a string, and this string has
to equal to running. We need the output
as running simply. The left side has to match the right side. So
this is a test. In Bash, it's a test, and all tests you do in
square brackets like that. So what I now want to say is, if I go to beginning
until this is true, it's a semiclm at the end, all that line has to be true. If it's true, that's fine. But if not, do slip to done. I know it might be difficult,
but what we do here, we simply say run this command, and if the status is
running, that's fine. We're done. But if not, if it's not true, then
slip for 2 seconds. In other words, wait for 2 seconds and
repeat that command again and keep doing
that until this is true. I know it's a bit complicated, but it is what it is. But next, let's do the mount. Mount T CIFS user
was Samba user. And the location of remote
folder and Local folder. Let's have a look. Q and status. Okay, mount, and now we just
start the elifin container, which has ID of two oh three. So we just start that.
That's our bar script. And one sec. I will
add two things. I will actually do slip
maybe 20 seconds here and also the I don't want all of that stuff
run at the same time. I mean, it won't run at
the same time anyways, but I want to give it some time so we can see actually
what happens. First, we should have open
media volt up and running. Then we should have
the folder mounted, and then after
another 20 seconds, we should see the Jerry
fin coming up so we can see actually the progress
as we look at that output. But as I said, you
can fiddle with this script later on.
Doesn't really matter. To save it, we press Control O, then Enter and Control X. And we can check what's inside that file using cut command. Cut mount, that's
our bar script. That's fine, but next question, how do I run this script when the server reboots? Because
that's what we need. We want to run all
those commands every time a entire Proxmox
server reboots. So we can do that
using Crone tab. And Cron tab, you can
write like Crone tab. L, it lists you current jobs, and currently there is nothing running because we
can see that hash. It means this is
actually commented out, so this is just for
our information, but there are no Krones running. We have to create
new ron that will run at every reboot.
So let's do that. We use Crone tab. E command
to edit the cron tab, current cron tab, and then
you can read all that stuff because it explains
actually what it does, for example, here. If you want to run something
every week at 5:00 A.M. That's how you configure in a cron tab. So you can see here. It's a minute, hour, day of month, and day
of the week, et cetera. So if we want to edit it, we just add another line here. But the thing is, I don't
want to use that format because that format means I want to run it
at specific time. If I set something like zero, let's say so that's
at 10:00 P.M. The asterisk asterisk
one at five, that would mean I want to
run something at 10:00 P.M. From Monday to Friday,
only weekdays. That's not what we
want to achieve. We want to run this Cron job every time the server reboots. So I need a special command, and it's called at reboot. That means this line will always run when we reboot the
server, and it starts up. Okay, cool. So what do
we want to run then? We want to run Opt scripts. And then we called
it mount dotsH. That means we want to run that script every time
a server reboots. But then for cron tab, you need a little bit
more information. I mean, you don't always need, but to be honest sex, you know, I learn it the hard way. So first, you specify what you want to use
to run that script. And we want to use
user been Bash. We want to explicitly tell it
that this is a bar script. And then what I usually do, I redirect the Dev. I mean, one can let
me write it down. You don't have to
care much about it, but this basically
means we redirect the standard output
to Dev null device, which basically discards it. And standard output is anytime
you run some commands, it generates some output. We are saying that we are
not interested in this, and this line says it redirects the standard error
to standard output. But because we discard
standard output, that means all the information
that is generated or all the messages that are generated when we run the
script are simply discarded. We are not interested
in them. And then there is one more
thing very important, I would say, I specify the
path environment variable. A what it does, it says to Chrome
where it can look for the executable files or for the binaries,
they are called. There are various
locations, and by default, Cron tap chron is usually
useless with that. It's always better to specify all those separate locations because colon divides
every location. So this location is different
than that, et cetera. So believe me, it's better to have it
than to not have it. That's quite a lot,
I know. So what we have to do we have
to press Escape, column W, Q, Enter. And we can see Cron tap
installing new Cron tap. So if we now do Cron tap L, it will show us our Crown job. Now we are ready for the reboot and see if
it works as expected. Okay, I know already I have to stop manually open Media volt. Let's just open it again. Let's log on, and let's power it down because reboot will
not do that for us. All right. Let's wait
until it's actually down. Okay, it's down now, the
moment of truth. Reboot. Let's see what happens.
Connection closed. If I refresh. Okay, it's still
stopping the griffin. Yeah. Shut down container, okay? Oh, all right. Now it's gone. So it's rebooting. Little beep. Not sure if you heard it.
That means it's coming back up. Let's see. First thing we should see is the open media
volt coming up, which is two oh two ID. Oh, it's already
up. That was quick. We will wait for eifin now. We can also check system
CTL status, Crone, I think. Oh, yeah. So Cron is running, and it also shows us the
output of most recent logs. We could see starting task
PVE. That's for two oh two. That means for open media volt. And what's going on?
What with eyfin? Tatus stopped. Let's see if
the folder has been mounted. But. And it hasn't. Oh, okay. All right. 1 second.
Screw it up. Let's go to where was it? Option Scripts. Yeah. Oh, can you
see the problem? Stupid. I forgot the
O here. Alright. I should have copy
pasted that instead. Let's do. No, no,
Mount dot a side. That's the thing. Then
Control O, enter Control X. Let's go to open Media
volt, restart it. I mean, stop power it off. Reboot again. Let's wait for it to stop completely. Reboot. Heard a little beep,
so it's rebooting. Yeah, we can't access it yet. Should be back up again shortly. Oh, there it is.
Let's see this time: successful or not? OpenMediaVault is up — that was quick, I'm surprised. Jellyfin's status is still stopped, though. Let's see if it mounted correctly now. Okay, let's go back again. No, it didn't. You know what we can do? Actually, I do want to record this, because this is what it usually looks like: something is wrong and we don't know what. Why doesn't it mount? What we can do is simply run our script manually: /opt/scripts, and then it was mount.sh. Let's run it and see what happens. "VM already running", and then "command not found" on line three. Let me Ctrl+C that and have a look — something is wrong on line three. One, two, three — I believe it's just about the spacing, the spaces here at the beginning and the end. Let's see. So we've got one, two, three — the third line. It's moaning about something here, and I believe it's about those spaces. Ctrl+O, Ctrl+X. Let's run it again. That's better already — we can't see anything, which means it's doing something. Doing something means it now sleeps for 20 seconds, remember? But it doesn't display that error anymore, so we can wait. We can obviously ignore the "VM already running" message, because we know it is. We're waiting for the mount and for Jellyfin to start, really. If they start, right. Okay, that's completed. So let's watch Jellyfin — it should come back up.
All right, it's running. That means we shouldn't have the same problem as before — we can see these folders are now there. One last test: let's reboot and see if it works after a reboot. Go to OpenMediaVault again. I know it's a lengthy video, but on the other hand I wanted to show you the entire process. Let's reboot again. OpenMediaVault is down, so yes, reboot. Let's refresh. All right, we're back, so PVE should become green shortly, and it is, together with OpenMediaVault. And now I'm pretty sure it will work as expected. Let's wait for Jellyfin so it's also nice and green. We have those sleeps in the script, so let's check the mount instead. And it is there — look at that, it worked as expected. So let's just wait the remaining 20 seconds for Jellyfin to come up. And it is up and running. Let's open Jellyfin then. I can already see my movies. If you go, like last time, to the libraries and try to add a library, we can see that shared folder, and we can see Movies and Shows inside. So yes, to be honest, it took a bit longer than expected, but I hope you liked it. Well, one last thing. As you can see, this 204, for example — it's my Ubuntu VM, but it's still down, because it's not included in our bash script. So what I can do now is go to PVE, open my script with nano mount.sh — the qm start 202 is already there — and add qm start 204 as well. Next time I reboot, my Ubuntu will be up and running too. I do Ctrl+O, Enter, Ctrl+X.
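Just so you can picture it, the script we keep editing ends up with a shape roughly like this. This is only an illustration under my assumptions — the exact script was written in the earlier video, the Jellyfin container ID is a placeholder, and in the real script the SMB password has to be supplied too (for example via a credentials file), otherwise mount would prompt and hang at boot:

```bash
#!/usr/bin/bash
# /opt/scripts/mount.sh - run at boot from root's crontab (sketch, not the exact script)

qm start 202        # start the OpenMediaVault VM (ID 202)
sleep 20            # give the VM time to bring its Samba share up

# mount the OMV share on the Proxmox host (add password= or credentials= in practice)
mount -t cifs -o username=smbuser //192.168.1.202/openmediavault /mnt/minipc

pct start <jellyfin-ct-id>   # start the Jellyfin LXC container (use your container's ID)
qm start 204                 # the Ubuntu VM we have just added
```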
All right, so that's all I wanted to say today. I hope it all makes sense, and thank you for watching.
11. Read AND WRITE from unprivileged LXC container: In one of the previous videos, we mounted a shared folder from the OpenMediaVault virtual machine on Proxmox and then passed it further to an unprivileged LXC container. Both the VM and the LXC container were running on Proxmox. But many of you noticed that you can only read from that remote location on OpenMediaVault — our unprivileged LXC container, in that case Jellyfin, cannot write to it. Today I will show you why that is, and how to change that default behavior so the LXC container can write to the VM. In fact, it doesn't matter which virtual machine it is. It doesn't have to be OpenMediaVault; it can be TrueNAS, Unraid, or a completely different VM that you have there. It doesn't even matter which LXC container you have. The method I'm going to show can be applied to any VM and any LXC container on your server. We will also understand why a privileged container can write to that remote location while an unprivileged container, by default, cannot, and we will see what we have to change to get write permissions for an unprivileged LXC container.
For this video I use OpenMediaVault again, just because I already have it up and running, and my shared folder is called openmediavault. I'm in Storage, Shared Folders, and you can see the name openmediavault and the relative path openmediavault — we will need it later. Note that when you add a shared folder, you have to set correct permissions as well. These are mine, the default ones: administrators can read and write, users can read and write, and everybody else can read only, so I didn't change anything. This OpenMediaVault has the default settings. I also created one user, called smbuser, and we will use it to connect to this share — basically, this smbuser will have read and write permissions to that shared folder. First, let's go back to Proxmox and mount this Samba share on the Proxmox host itself. This is my Proxmox; I will go to the PVE node. You can see 202 is my OpenMediaVault VM. I've also got some Ubuntu VMs, but we can ignore them. What I need now is the PVE node, and I go to Shell. In this shell I will create the mount point — let's cd to /mnt. If we run ls -l, you can see it's empty. I will create a folder here called minipc — I believe that's what we used last time. So I use the mkdir command to create a new folder and call it minipc. If I do ls -l now, there is my empty folder. And to mount that Samba share, I need a utility called cifs-utils, so I have to install it first. The command is apt install cifs-utils. In my case, you can see it's already the newest version — I've got it installed, but you will probably have to run this command.
So now we can mount that remote folder — the OpenMediaVault share — onto the local /mnt/minipc folder. The command I need is mount -t cifs. Then I use -o; this is to provide additional information like the username and password, and the user and group IDs we will learn about later. So my user is smbuser. Now I have to specify the remote location: what's the IP address of OpenMediaVault, and which folder do I want to mount? If I go back to OpenMediaVault, to Network, Interfaces, I changed my IP address to 192.168.1.202. The last part, 202, matches my virtual machine ID — that's what I always do: if the VM ID is 202, then the IP address is 192.168.1.202. So that's what I use in my command: 192.168.1.202, and then the name of the folder. The folder name is here in Storage, Shared Folders — I called it openmediavault, so I go back and type openmediavault. Now, where do I want to mount it locally? I use the folder I've just created, so it's /mnt/minipc. Then the password I configured for smbuser, and that's it. If I now run the mount command with no arguments, I will see our mount at the bottom, and I want you to inspect that output, because we've got the username smbuser, but note the user ID and group ID: currently it's user ID zero, group ID zero. Why is that? Because I mounted that remote location as the root user here — I'm still root on this Proxmox PVE node. If I run the id command, that's what I am: root has user ID zero and group ID zero, and that's why those values have been passed to the mount. I hope that makes sense.
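Put together, the commands on the Proxmox host look roughly like this (the IP address, share name and mount point are the ones from my setup — use whatever you configured in OpenMediaVault):

```bash
apt install cifs-utils        # provides mount.cifs
mkdir /mnt/minipc             # local mount point on the Proxmox host

# mount the OMV share; because we run this as root, the files show up
# locally with uid=0 / gid=0 unless we say otherwise
mount -t cifs -o username=smbuser //192.168.1.202/openmediavault /mnt/minipc
# (mount prompts for the SMB user's password)
```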
If I clear the screen and press the up arrow: I simply mounted that remote location — that IP and that folder — onto the local folder /mnt/minipc. That means if I Ctrl+C and go to /mnt/minipc, I should see whatever content I have on OpenMediaVault, on that virtual machine. So let's do that. ls -l — that's my folder. If I cd into minipc and run ls -l, I can see marek1.txt. That's, in fact, the only file I've got on my OpenMediaVault shared folder. So this content comes from OpenMediaVault; it's not here locally on the Proxmox host. Whatever is on OpenMediaVault, I will be able to see by going to this location, /mnt/minipc. An important thing at this stage: we should be able to read and write to the remote location from the Proxmox host itself, because the user specified here, smbuser, should have read and write permissions. So if I, for example, touch a new file called marek2, I should be able to do that. Now if I do ls -l, I've got two files. Maybe to make it even clearer: if I go to another machine — this is my MacBook — you can see I also connect to the same OpenMediaVault folder from here as smbuser. If I go to that folder, I can indeed see both of those files, which proves that I am able to write to that remote location from the Proxmox host itself. You have to make sure this works as expected, because without it there is no point going any further. I also want you to notice that this minipc folder belongs to the root user and the root group, because that will change later on. And the ID of the root user is always zero: user ID zero and group ID zero. Okay, so far nothing looks strange; everything works as expected. So we can try to pass this location further, to our unprivileged container, and see what happens. I don't have a container yet, so let me quickly create one.
So I click Create CT, which means create container. I will give it the ID 203. For the hostname, I will call it marek-lxc. I create the password for the root user and click Next. Now the template: I've got one template available, but any template will do. I just want to say, there is no such thing as a "Jellyfin LXC container" — it's basically one of these templates with Jellyfin installed on top of it. Or you can install anything else you want; you can even have all of your programs on one single LXC container. But that doesn't really matter, it's not our topic, so we click Next. Disks: eight gigabytes is fine, it's just for testing purposes. Click Next. CPU: maybe two cores. Next, memory: I will put 4096 for the main memory, and the swap can stay as it is. Next, network: I will put a static IP, 192.168.1.203 — as I said, I match the last octet to the ID of the VM or LXC container, so it's easier to remember which IP it is — with /24, and gateway 192.168.1.1, which is my home router. I click Next, DNS — leave it as it is — Next, and Finish. That's it, that's completed. I can already see it; the name has just changed to marek-lxc, so I can now start my container. It took a few seconds, and the container is up and running. So I just double-click on it and I can log on to it, using root and the password I've just provided during the container creation.
That's my container. Let me cd to the root folder and run ls -l — these are all the folders available here. What we did in the last video on the Proxmox host: if I go back to the Proxmox shell, we run the command pct set, then the ID of the container, 203. Then we create mount point mp0. Next we specify the local folder we want to bind-mount, which is /mnt/minipc, and the mount point on the container, mp=/shared. That means a shared folder will be created in the root of the container's filesystem, and it will be bound to /mnt/minipc here on the Proxmox host itself. So I just press Enter, and that's it.
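For reference, that pct command looks roughly like this (container ID, host folder and mount point as used in this example):

```bash
# bind-mount the host folder /mnt/minipc into container 203 at /shared
pct set 203 -mp0 /mnt/minipc,mp=/shared
```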
Now, if I go back to my container and run ls -l again, look what changed: I now have a shared folder. If I scroll up, it was not here previously. So I can either create it manually and bind it to the Proxmox host, or, if it doesn't exist, it will be created for me. If I go there — cd shared, ls -l — I can see both of those files that are on OpenMediaVault, passed to the Proxmox host and then from the Proxmox host passed further to this unprivileged container. The problem is that I cannot write anything: if I touch marek3, I get permission denied. Some of you might notice that the pct command created that shared folder for user nobody and group nogroup, and maybe you think that this is our problem, but believe it or not, it is not. Even if you changed it to root, it would not solve our problem, and we would still not be able to write to that remote location. The issue here is how the host system — in our case Proxmox, but in fact it can be any Linux distro — handles privileged and unprivileged containers. In fact, if I changed this container to a privileged one, I would be able to write to that remote location. Why? Because my root user ID and group ID on the container and on the host would be the same. If I run id on the container, you can see I am root here, and the user ID and group ID on the container itself are also zero. For a privileged container, requests would be seen on the host system — on our Proxmox — as coming from user ID zero and group ID zero, so nothing would change. That means the folder permissions would match the root user, and I would be able to read and write to the shared folder.
However, we are running an unprivileged container here, and the difference is that even though the container looks exactly the same — as you can see, I'm still root on my container — the requests coming from this container are translated to something different on the host. Proxmox adds an offset of 100,000 to the user ID and to the group ID. So effectively, for the root user inside the container, on the host I appear as user ID 100,000 and group ID 100,000. This is to prevent the so-called container escape hack, and that's why privileged containers are considered unsafe. If I had a privileged container and I could hack my way out of it to see other folders on the host system, I would basically gain full control of that host, because I would be seen as root, and root can do anything — it's a privileged user. But because my container is an unprivileged one, even if I escaped this container, on the host I would be seen as some random user with an ID of 100,000, so I would not be able to do much there. That's why we should always use unprivileged containers whenever we can.
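If you want to see where that 100,000 offset comes from, it is the default user-ID mapping for unprivileged containers. On a standard Proxmox install you can typically inspect it like this (the exact ranges may differ on your system):

```bash
cat /etc/subuid    # typically: root:100000:65536
cat /etc/subgid    # typically: root:100000:65536
# i.e. uid 0 inside the container is seen as uid 100000 on the host,
# uid 1000 inside the container as uid 101000, and so on.
```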
But the question is, what can we change for an unprivileged container so we can read and write to that remote location? As always with Linux, there are many ways to solve it, but I picked the one that I think is easiest to explain: we will simply mount that shared folder from OMV on the host while matching the user ID and group ID of the container user, not the host user. Let me show you what I mean. I will give you two examples: one for the container user root, and the other for some other container user — we'll create one.
But let's first delete what we created. Let's go to the container, to Resources — this is our mount point — and let me just detach it. As you can see, it doesn't clear straight away; we would have to restart the service, so I will just reboot the container and log on again. That's it, that mount point is now gone. Let's go to the Proxmox side as well — I mean the PVE shell — and remove the mount there too. So it's umount, and the location is /mnt/minipc. That's it. If I run mount -l now, I don't have that mount anymore. I will clear the screen again. Let's just double-check the container: this is our container, and the root ID is 0/0. So here on the Proxmox host itself we will do something different now. I will run mount — or, you know what, let me just press the up arrow; that was our command, and that's what I need, but I want to change the values after that -o. I will use the ID of the container user plus 100,000. So basically you just add uid=100000 and gid=100000 — that's the only change. Now I press Enter, it asks me for the password — the password for that Samba user on OpenMediaVault — and now, if I run mount -l, I can see that mount again. But this time the difference is that my user ID is 100,000 and the group ID is 100,000. What else changed? Let me clear the screen. If we cd to /mnt and run ls -l, have a look: our user ID is 100,000 and the group ID is also 100,000, and the mount command did that for me. I don't have to run chown or any other command — that's done during the mount process. Okay, so that's the first step.
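So compared to the earlier mount, the only change is the extra uid/gid options — something like this:

```bash
umount /mnt/minipc    # remove the old mount first

# mount the share so the files appear as uid/gid 100000 on the host,
# which is what root (uid 0) inside the unprivileged container maps to
mount -t cifs -o username=smbuser,uid=100000,gid=100000 \
      //192.168.1.202/openmediavault /mnt/minipc
```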
Now we have to pass it further to the container. So let's go back to the container again. If I cd and run ls -l, maybe we've already got the shared folder; we can see the ownership changed to root as well, but that doesn't really matter. Let me create a different folder this time — mkdir omv-root, something like that. ls -l... no, sorry, that doesn't look good, rm -rf it. Let's go to the /root folder, because that's the home folder for the root user, and create it there: mkdir omv-root. So we are in /root, we've got an omv-root subfolder inside, and we will try to bind this one, just to make it different. So I go back to Proxmox. I will use the up arrow again — that was the command, and I could reuse that /shared folder, but I just want to show you the difference. I will change that /shared to the folder we've just created: /root/omv-root. Press Enter. The mount point should be created; if I look here, I can see the mount point binding to /root/omv-root. So if I go to the container, cd to that omv-root and run ls -l, I can see the files from OpenMediaVault. But this time, if I touch, let's say, marek3.txt, I can create new files. And indeed, if I use that window from my MacBook, you can already see the file was created on OpenMediaVault. So this time I can not only read but also write to that remote location. Let me run ls -l: you can now see three files. And that's how it's configured for the root user on the container, and how we pass it on the Proxmox host for that root user.
But what if I have a different user here on the container? Maybe I installed an application — maybe Nginx, maybe Jellyfin, whatever — and it doesn't use the root user; it uses some different user. Let's do it now, and you will see that it's really not that complicated. We just follow the same guide. But first let's destroy everything again, right? I will just detach this mount point, yes, and I will reboot the container to fully get rid of it. I will also go back to the Proxmox PVE shell and unmount /mnt/minipc. If I cd to /mnt, we can see that minipc belongs to user root and group root again. Okay, let's jump back to the container. Let's create a user. I will run sudo adduser — sorry, I don't need sudo because I'm already root, but never mind — and I will call it marek. I create a password and switch to that user: su marek. Now I'm not root anymore, I'm user marek. Let me clear the screen. If I run id, my ID is 1000 for the user, and the group ID is also 1000. That makes sense, because that's the first user that has been manually created on that container, and Linux numbers those users starting from 1000. If I run pwd, print working directory, you can see I also have my home directory created, /home/marek. I will create another folder here: mkdir omv-marek this time.
The full path to this folder is /home/marek/omv-marek. Now let's go back to the Proxmox host, and on Proxmox I will use the up arrow again because I'm lazy. So: user marek has user ID 1000. I have to add another hundred thousand to match this user's ID on the Proxmox host, which means I have to put 101000 here for both the user ID and the group ID. We press Enter, type the password for the Samba user, and that's it. If I run mount -l, I can see it again, but this time with a different user ID and group ID. If I run ls -l — sorry, I'm already in /mnt — this time my minipc folder has user ID 101,000 and group ID 101,000. Now the only thing we have to do is pass it further to the container. We have the mount from OpenMediaVault to the Proxmox host, but the missing bit is from the Proxmox host to the container, so we create that using the pct command. I press the up arrow again, but this time the mount point on the container is — what was it — /home/marek/omv-marek. That was the folder. I press Enter, and that's it. Let's go back to the container. If I run ls -l, nothing changed there, but if I cd to omv-marek and run ls -l, I can see the content of OpenMediaVault. And if I touch a new file, marek4.txt, I can write to it as well. So, ls -l — I now have write permissions.
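For the container user marek (uid 1000 inside the container, so 101000 on the host), the whole fix is the same two commands with different numbers and a different mount point — again a sketch using the values from this example:

```bash
# on the Proxmox host: mount the share as 1000 + 100000 = 101000
mount -t cifs -o username=smbuser,uid=101000,gid=101000 \
      //192.168.1.202/openmediavault /mnt/minipc

# bind-mount it into container 203 at the folder we created for user marek
pct set 203 -mp0 /mnt/minipc,mp=/home/marek/omv-marek
```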
So yeah, I hope that makes sense. That solves our issue. But there is maybe one more thing worth mentioning: this solution is not only for Proxmox, because you can create virtual machines and LXC containers on any Linux distribution. Proxmox only provides that nice user interface; it does not really implement anything new. All the functions here are already included in the Linux distro that Proxmox runs on. Basically, you can use these methods on any Linux distribution — that's what I mean, it's not limited to Proxmox, which runs on Debian anyway. And yes, we can use the Proxmox graphical user interface to create containers and so on, but there is nothing stopping you from using plain Linux command-line tools for all the tasks we performed here. For example, you can create a container by clicking this button, but you can also use something like lxc-info with the name of my container, which is 203 — that is a Linux command you can run on any Linux distro. You can download the template, create your containers, and do everything you want using just the Linux command line. You can see my container is up and running, you can even see the IP address, et cetera. Then we have the mount command — that's already a standard Linux command, so no need to explain that. But if you wonder what that pct set 203 command does — because it looks like a Proxmox container tool, or whatever it's called — all it really does is add one line to the configuration of my container, and the configuration of my container can be found in /etc/pve/lxc. So if I run ls -l here, you can see the configuration of my container 203, and if I cat it, you can see that all the pct command does is add this one line and then restart the service.
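In other words, after running pct set, the container's config file gains one extra line along these lines:

```bash
cat /etc/pve/lxc/203.conf
# ...existing settings...
# mp0: /mnt/minipc,mp=/home/marek/omv-marek
```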
You'll notice that I rebooted the container instead, but that's only because containers are so quick to reboot, and it does the same job. I could also just add that line manually, reboot the container, and end up in the same state. So, that's all I wanted to say today. I hope it all makes sense, and thank you for watching.
12. Deploy ARR stack with qBittorrent and Jellyfin using just 1 command!: How long do you think it might take to deploy ARR apps like Prowlarr, Sonarr, Radarr, Lidarr, or Homer, then add a qBittorrent client to that, and a Jellyfin media server on top of that? Some of you might think it would take hours. But with the method I'm going to present, you only need one command, and it only takes about five seconds to have it all deployed. You can deploy it on any operating system where Docker can be installed, which means nearly any operating system really: on Linux you can have Docker installed natively, and on Windows or macOS you can use tools like Docker Desktop to get Docker running. Any operating system will do. Let me show you what I mean. I will go to Portainer first — you don't really need Portainer for what we are going to do today, but I just wanted to show you clearly what is going to happen in the background; Portainer as such is not a requirement. I just wanted to show that we have only the Portainer container running, plus that RMBG remove-background app that we were working on in one of the previous videos. As you can see, there is no qBittorrent, no Jellyfin, nor any of the ARR stack applications running. That's my whole point. Let's open the terminal then. The command I need is sudo docker-compose up -d. Let me show you what happens — press Enter. Oh, password. That's it, job done. Well, it didn't even take five seconds; it took like three or four seconds, I guess. So let's go back to Portainer — and I don't know why Homer is always late, its status is always shown a little bit later than for all the other containers — but never mind, you can see we have something called an arr stack, and all those applications are part of it. So now, if I want to go, for example, to qBittorrent, I can access it on localhost on port 8080. So I just go there: http://localhost:8080. If I log in, this is my qBittorrent.
is my QBtTrrnt. If I want to access
something else, maybe Rader, Ryder is
running on port 787 night. That's the whole
point of this ptaer. I just wanted to
see it graphically. So 7878, that means
I can go here, HTTP. Local host 7878.
This is my Ruder, I can access any
other application. I will not go through them all, but you know what I mean. How does it work? How can
it be deployed so quickly? Some of you might have already guessed that this
command, looking at that, you probably have
guessed that we have a Docker compose file that includes configuration
for all those components. All the configuration for those applications will be
in the Docker compose file. I will share that file
with you so you can have exactly the same solution
applied on your system and we will go through it step by step to understand what it does and what you can change to adjust it to personalize
it to your needs. What I'm going to do now,
I will remove everything, including Docker compose and all the images and I
will start from scratch. I will show you step by step how you can also deploy it this way. Maybe before I do, let me
show you some more commands. For example, here, I can now do stop to stop all
the containers. In the ptainer, they
will be shown as exited. So if I go back, I
can do now again, pseudo Docker compose
RM to remove them. Are you sure? Yes, done. That was not even a second. If I go to portainer,
you can see it cleared. There are no containers running. But I can also go back up
arrow, up arrow up arrow, psudodocer compose
up the D. Again, I can build everything
again in three, 4 seconds. And Homer is late to the
party again. But it works. Just to prove it,
it's 7575 port. So let's go here.
HTTP. Local host. So yeah, you can see
it's up and running, we just have to log onto it. But let me remove everything, as I said, and we will go and build everything
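So, once you are in the folder with the docker-compose.yml file, the whole lifecycle is just these few commands (on newer Docker installs the command is "docker compose" with a space instead of the dash):

```bash
sudo docker-compose up -d    # create and start the whole stack in the background
sudo docker-compose stop     # stop the containers (they show as 'exited' in Portainer)
sudo docker-compose rm       # remove the stopped containers
```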
Okay, I have removed everything now. So where do we even start? First, you have to make sure you've got Docker and Docker Compose installed on your system, and how you do that will obviously depend on your system. For me, I'm on Ubuntu, so I can just run sudo apt install docker.io docker-compose, and I can add -y to auto-answer. Press Enter — and as you can see, I've already got it installed: Docker Compose is already installed, newest version, and docker.io is also installed, newest version. So that's fine. What we have to do next is go to the GitHub repo that I created. I will paste the link here, but you will also find it in the video description and in the comments. You just have to paste it into your browser and go there. These are the files that we need. I think the easiest way is to click on that green Code button and just download the ZIP. This way you don't need to install anything like the GitHub CLI. You simply pick Download ZIP and it downloads automatically; it took just a few seconds because it's very simple code. That means it should now be in my Downloads folder. Well, there's some old stuff in there we don't need anymore, but never mind. Let me just go there using my terminal. You can see the repo folder for this video's one-click apps. We have to unzip it first because it's a ZIP file — maybe I will make the terminal bigger. So it's unzip and then the name of the file. If we run ls -l again, you can see I've got both the zipped and the unzipped version, so I have to cd into the unzipped one. If I run ls -l, you can see the Docker Compose and README files; you have to run ls -la to see all the files, because there is also a hidden .env file. It's very important for us. And what you have to do is really follow what's in the README file.
because it looks better. So these are the
instructions that I wrote, bear in mind, these are
instructions just for myself. I made them a little bit better, so it's clearer for everybody, but it's not like
professional read me file. I should be good enough. So
I pasted some useful links. Then you have to
download zip files. We already did that and then
this installation process. And before we run that
Docker Compose a command, let's have a look
at the other files. We've got Docker
Compose, for example, and you will see all the
services are configured here. It's a pretty long file, not that long but, you know. It has conflict for
every single service, for every single application. Like, for example,
here, you've got Prolar and in the volumes, you will find a variable. It's called R path. I
will explain what it is. You will see that
every single service will have that variable. If I scroll further to sonar, you can see also path variable. And then the ports and
some other configuration. But I also want you to have a
look at the last two lines. ENV file is dot ENV. It's this third file. So let's click on it, maybe. So in this dot EN V file, you can see that
variable specified, and you can change it
to whatever you want. What it means, all
my R apps will be installed in media
folder in R folder, and it will create sub folders
with the services name. If I go back to Docker Compose, so that's basically it. It will be media,
forward slash R, forward slash prowler, and
then forward slash convic. That will be full path for
this particular volume. So what I mean if you
want to change it, you can change it to
whatever suits you. Then we have user
ID and group ID, and we've got the time zone. User ID and group ID, you
can leave it as it is, or you can change it as well, and the time zone just adjust to whatever where
you live, you know. It will depend on your location. You will also see if you
install this stuff on Windows, this path will look a little bit different
because on the Windows, you've got usually
something like that. You use back slashes,
not forward slashes, and you have to specify the drive like C or
D or E, whatever. Reason I did it that way is every single service will have the same user ID and group ID. Each service will have
the same time zone, and each service
will be installed in the same media forward
slash R folder, which is very
important because we will change the permissions
to that folder. But that's enough. Maybe
these settings are okay for you and
you don't want to change anything.
You don't have to. You just go back to read me, and that's basically all
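Just to illustrate the idea, the .env file holds a handful of variables along these lines — the exact variable names and values are whatever is in the repository, so treat these purely as placeholders:

```bash
# .env (example values only - adjust to your system)
ARRPATH=/media/arr/     # base folder where every service stores its config
PUID=1000               # user ID the containers run as
PGID=1000               # group ID the containers run as
TZ=Europe/London        # your time zone
```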
You just go back to the README, and basically all you have to do is run sudo docker-compose up -d. I will copy it and go to my terminal. I only have to make sure that I am in the same location as my docker-compose file, then I just paste my command. If I were somewhere else it would still be possible to run it, but you would have to use -f and then the full path to this docker-compose file. Because we are already here, we don't need to do that. I will make the window bigger. Now, if I run it for the first time, it will take much longer, because Docker has to pull all the images for every single service. Let me show you — press Enter. You can see it's downloading the image for Prowlarr first. And now it's completed, but it took three or four minutes, I think; that will depend on your internet speed and some other variables. What it means is they're all done now, so I should see them in my Portainer. And Homer is starting again, but — oh, the first thing I see is that the stack name changed; never mind, it doesn't really matter, you can change the stack name to arr or whatever you want. The most important thing is that all those containers are now up and running, very fresh, and Homer is now shown as healthy. Regarding the deployment itself, that's basically it, because all it does is go through that Docker Compose file. You can see each service has an image, and I chose the latest, newest image for every single service, but you can adjust that as well; if you want, for example, to stick with a particular version, you can do that by changing this value. But what I wanted to show you is that arr path. Let's go to the README file again, because the deployment is done, but I also want to show you the initial configuration of every single service. Let's scroll a little bit further. This is what we did; this is what we would use if we wanted to stop and remove the services, but that's not what we want to do right now. The instructions say: go to the folder specified in the .env file — I mean this one, /media/arr. Let's go there: cd /media/arr. If I run ls -l, you will see all the services and a downloads folder. They are all here in this location, created at exactly the same time. The README then says I have to change the permissions to whatever is in that .env file as well: 1000, 1000. I simply have to match that user ID and group ID and assign those values as the new owner of this arr folder. It might be a bit confusing, but basically, what we have to do — if I cd .. and run ls -l, this is my arr folder — all I have to do is run sudo chown recursively, because I don't want to change only the arr folder itself; I want to change the permissions for all the subfolders inside it. I want to change the owner to 1000:1000 for the arr folder.
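The actual command is a one-liner (use the folder you set in your .env file):

```bash
# recursively hand the whole arr folder to uid/gid 1000, matching PUID/PGID in .env
sudo chown -R 1000:1000 /media/arr
```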
Press Enter, and that's it. If I run ls -l now, you can see it changed from root to marek. Well, coincidentally marek: if we do id marek, the user marek on this host, on this Ubuntu server, has an ID of 1000. If you log on to the container itself, you will see it's running as user ID 1000, but the user will be called abc or something like that. It doesn't really matter what the name is here; what matters is that this value has to match. I think I am overcomplicating this, really. Let's go back to the README. The permissions have been changed, which means every single container will have exactly the same permissions inside that folder. So now we can configure the qBittorrent service. Why? Because it only uses a temporary password. To configure qBittorrent, we have to run sudo docker ps. Let's do that — clear the screen, then sudo docker ps. All my containers are listed here; maybe make the window a little wider so it's easier to read. I need the ID of the qBittorrent container. You can see the container ID — it's this column — so I need this value. Let me copy it, and then I need to run sudo docker logs with that container ID. Let's do that: sudo docker logs, and I paste the container ID. You can see that you can access qBittorrent by going to this URL — let's open it. The administrator username is admin, and because the password was not set, a temporary password is provided for this session, and this is the password I have to use. So let me copy it.
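For reference, fetching that temporary password boils down to two commands (the container ID is whatever docker ps shows on your system):

```bash
sudo docker ps                       # find the qBittorrent container ID
sudo docker logs <container-id>      # the temporary WebUI password is printed in these logs
```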
Let's go there: username admin, and for the password I paste whatever was in the logs. Click Login. Don't let the browser update the saved password, because that's not the password we're going to use permanently; this is only a temporary one. Let's go back to the README, and you can see that now you go to Tools, Options, WebUI — and this is where I can create a permanent password. I will do it now. That's my password, and I also tick "Bypass authentication for clients on localhost". Then I scroll down and save it. What I can do now is log out and log in again, typing the new password I've just created. Now I log in and it takes me to qBittorrent with the newly created, permanent password this time. All right, let's go back to the README. Anything else for qBittorrent? Doesn't look like it, so I can now configure the Prowlarr service. I'm not going to explain what every service does, because I kind of assume you already know; I will only concentrate on the deployment and initial configuration of those services, okay? You can easily Google what Prowlarr is for and you will find out — there are lots of great guides on it already. For the initial configuration, I just paste the URL, and every single service, on its first run, asks you to configure a user and password. Then it's up to you whether you create the same user and password for all of them; I am lazy, so I will use the same user and password for every single service, but it's your choice. For the authentication method you can choose basic or forms; I usually choose forms. Then the username admin — I will leave it as it is — and a password, whatever you want. Save. That's done. Prowlarr's main job is to have some indexers configured, but let's go back to the README first. It says: go to Settings, Download Clients, click plus, and then add qBittorrent as a download client. I already clicked Add download client, so I pick qBittorrent, and I have to put in the credentials for qBittorrent — whatever I configured for qBittorrent, I paste here. And then, if you click that Test button, you will see "unable to connect to qBittorrent", because here you have to type the IP address of the host — not localhost, but the host; in my case it's my Ubuntu server. So this is the main host, and I can type its IP address. I've got loads of virtualization, so it's a bit messy, but basically this is the IP address of my host machine. I can copy it and paste it here: 192.168.1.204. If I test now, it looks fine, so I can save it, and you can see qBittorrent is now enabled.
that's basically it. So we can go to Sonar now. If you click the link, we can go to Sonar and basically
do the same thing. Authentication
method, I will use forms, username and password. I will paste the same again, but you can have
different password for each service and save. Ah, what I accidentally did, I closed that read me
file by opening the Sona. Let me paste the link again. That's the read me
file. We are on Sonar. In Sonar, I go to settings,
media management. And then what I have
to do add root folder and set data TV shows
as my root folder. Add root folder, data, TV shows. Okay. And what I
did here really is if we go back here and if we check our
Docker compose file, if we scroll back to Sonar, I matched this folder. TV shows is a root folder
for Sonar service, and they will differ a
bit because, for example, for radar, it will
be data movies. For IDR it will be
data Music folder. So there is a slight
difference between them, but except of that,
everything is very similar. So go back to Sonar root
folder has been added. Let's go back to Rhythm file. So first step is done. Now I go to settings,
download clients, plus. So again, settings,
download clients, plus, and our download
client is KubitTrrent. And we repeat what
we did previously. Post is 192-16-8124, credentials for QubiTrrent
and I can test it now. And it gives me a
little green tick. If I run again, have a look. Green tick means okay,
so I can save it. And I've got QBID torrent added. There is also that
There are also the remote path mappings; I think I mentioned that in the README. Yes — in case your qBittorrent and ARR stack are installed on different hosts, this is something you can play with so it still works. For us it's not important, because everything is running as the same stack on the same host machine. That means I can go further: Settings, General, scroll down to the API key. Settings, General, scroll down — that's it, the API key. I copy it, and what do I do with it? I have to go to Prowlarr, Settings, Apps. Under the applications I click that plus, and since we are currently setting up Sonarr, I choose Sonarr. It asks me for the API key, so I paste it. Let's test it. We can see it moans about localhost again, because I have to use the IP address of my host, which is 192.168.1.204 — same for the Prowlarr server field. If I test now, it's all fine: green tick, save. What else shall we do here? Let's go back to the README: Settings, General, switch on Show Advanced Settings. Settings, General — the switch is here, Show Advanced. Now you can see more options; scroll further and you have Backups, and under backups I have to configure /data/backup. So let's click that folder field, remove what's there, and choose /data, then backup — that's my folder. Basically, what we do here is match what's in the Docker Compose file. Scroll down: we are matching this folder, /data/backup. The path on the left of the colon is on the host, and the path on the right of the colon is on the container, and right now we are matching the path on the container, which is /data/backup. All right, so let's go back, click Save Changes, and that's it.
Let's go back to the README. Sonarr is done; now Radarr. But if you read the instructions, you will see it's exactly what we did with Sonarr. The only difference is that your root folder for Sonarr, as I said, is /data/tvshows, and for Radarr it's /data/movies. Then for Lidarr and Readarr you will again have to match this folder to whatever is in the Docker Compose file: Lidarr uses the /data/music folder, and Readarr uses /data/books. So I will not go through them all; I hope that is clear, and the root folder will really be the only difference between them. Okay, let's do Radarr quickly, but this will be the last one we configure. So again: forms authentication, username, password — that's it. Next, Settings, Media Management, root folder — Settings, Media Management, root folder, /data, and you will see it there anyway, movies. We know it's not backups, it's movies. That's cool. Settings, Download Clients, plus, qBittorrent — Settings, Download Clients, plus, qBittorrent, credentials for qBittorrent, and not localhost but 192.168.1.204 or whatever your host IP is; test, save. Next, Settings, General, API key — Settings, General, API key. Go to Prowlarr, add application, Radarr — one sec, don't mix them up, because there is Radarr and there is Readarr; I'm setting up Radarr right now. Paste the API key, replace that localhost with my IP, test, save. Okay, General, Show Advanced, /data/backup — General, Show Advanced, /data/backup. Okay, save changes. Do we ignore the remaining three? Well, to be honest, Homer — yes, it's in the stack, but I never played with Homer; I never had time to have a proper look at it. So it is added, and you can access it on port 7575, but I don't know much about it because I've never used it. What we have to do now is go back to Prowlarr and click Indexers at the top, then Add Indexer. And this is the list — there are loads and loads of indexers, you can see 627 of them. You have to find the ones that work for you. What can we do? Here's a popular one: test, save. And what else? Another one: test, green tick, so save. Okay, we can close it now.
This is something you have to fiddle with, because some indexers might work better and some worse, depending on your location, your needs, et cetera. Okay, so let's go back to the README, and then click the Sync App Indexers icon — it's a little icon, sync app indexers — we have to click that. All right. Now, if you go to Settings, Apps, we can see a full sync for Radarr and Sonarr, and that's cool. As you can see, the ARR stack is basically complete — I mean, not entirely, because you still have to go through the configuration for Readarr and Lidarr, et cetera, but the process is exactly the same for all of them. And how do you work with it? How do you add a movie to Radarr or a series to Sonarr? Well, if you go to Radarr, for example, and go to Movies, you can see I have no movies at the moment, because I never searched for any. And you know what? There is a lot of stuff you can find using Radarr and Sonarr, but we obviously want legal stuff only. So I will go to Firefox, to Google, and search for films that can be fully legally downloaded. You get some Reddit results, but there is also a Wikipedia "List of films in the public domain" — look at that link, it's the second one. Public domain means the copyrights have either expired or the film never had any copyrights. If we click on that list, it says no government, organization, or individual owns any copyright over the work. If we scroll down, there is a lot of legal stuff here. Basically, if you scroll further and further, you will see a list of the films, and you will find more information about each of them — and there are quite a lot. If I scroll further and further: A Star Is Born, but not the new one — the 1937 version. Let's see if we can find it. Okay, Technicolor, drama — let's copy the title. Go to my Radarr, Add New, paste it, and add 1937. There it is: A Star Is Born, 1937. I can click it, add the movie, and it will be listed in my movies. If I click Search All, you can see the color changed. That means if I go to my qBittorrent, I can see it has already started downloading: A Star Is Born 1937, remastered. What this means is that when it's downloaded, I can go to my Jellyfin, which is running on port 8096. I will go there. Jellyfin I have to configure first: a password, and the username — also admin, so every service has the same username. I click Next. Now, the media library: I can add a new one, content type Movies. What I basically have to do here is add the folder specified in my Docker Compose file; of course, if I scroll down, Jellyfin is here. I have to match the /data/movies folder, because that's what's on the container — and, as I said, on the left is the path on the host. I have to match the container folder, /data/movies. Add /data/movies. When the film is downloaded, I will be able to watch it using my Jellyfin application. I hope it all makes sense. If you have any questions, let me know in the comments. Thank you for watching.
13. ARR stack with Gluetun VPN (build your own docker-compose.yml file!): Hi, everyone. Have a look — this is my newest ARR stack. As you can see, apart from the standard containers like Sonarr, Radarr, Jellyfin or qBittorrent, I have now added not only Bazarr but, most importantly, a Gluetun container, so my traffic can go via a VPN tunnel. All of these services can be deployed in about five seconds with one simple command: docker-compose up -d. It's very similar to what we did in the previous ARR stack video, but that previous stack was a little bit smaller, and you asked back then how to add some additional services, like the Bazarr I mentioned — but most importantly, you asked for that Gluetun container, which can be added and then used to manage our VPN connection. Here it is. This video will be a little bit different, though, because I don't want to just hand you the completed Docker Compose file to run. Today I want to go through the process of building that Docker Compose file from scratch. If you ever want to add, remove or change any containers, any services within that file, you will be able to do it yourself: you will understand what every single line in the Docker Compose file does, so you can change it to do exactly what you want. Yes, you can change it and adjust it however you want. But before we start building it, let me just show you how to remove the current stack and how easy it is to run it again. To remove my entire stack, I simply run the docker-compose down command. Press Enter and it stops and removes all the running containers, and it also removes the network. If I go back to Portainer, they are now gone; only Portainer is up and running, and that is not part of the stack. If I want my stack back up and running, I just press the up arrow because I'm lazy, change it to up -d, press Enter, and within a few seconds — probably not even three — I have my stack back up and running. I see Gluetun is still starting, but if we refresh, it should now be healthy. It is healthy now. That's how easy it is.
But now we will go through the process of building the Docker Compose file. Let me give you a glimpse of what it currently looks like. All those services use this single file, docker-compose.yml, and you can see that we have all these services here: we've got Gluetun, we've got Jellyfin, we've got qBittorrent, Readarr, Lidarr, Bazarr — however it's pronounced, my pronunciation is probably crap, but never mind — Prowlarr, Sonarr and Radarr, of course. That's what we are going to build from scratch; I want you to understand every single line within that file. All right. So let's close it and remove the stack again. I will actually remove everything, you know — I will also remove the images, and I will even remove Docker itself, to really start from scratch with nothing installed. All right, all of that has now been removed; if I run a docker command, you can see "no such file or directory". So let's start from scratch. First, let's run sudo apt update and sudo apt upgrade, so we have our system up to date, and I will add -y, which auto-answers yes to any questions. Press Enter — that's now done. Next, let's install Docker. I just run sudo apt install docker.io — but docker.io does not include the Docker Compose command, so that's something we have to add: I will simply add docker-compose. And I will also add that -y, because we need both the Docker and Docker Compose components. I press Enter and it's being installed now; let's just wait for a while, it shouldn't take long. Okay, well, that's not entirely unexpected: I can see a failure, but I've noticed that sometimes when I uninstall and quickly install again, it gives me that failure. You shouldn't get that. If you have a fresh system, Docker will be fine.
If, for example, I clear the screen and run systemctl status docker, it says failed: "start request repeated too quickly". So what we can do is just start Docker manually: sudo systemctl start docker. Now, if I check the status again, it's up and running. Sorry — as I said, this only happens when you uninstall and reinstall shortly after, which is what I did. Never mind, it's fine now. So we've got Docker and Docker Compose. We can check with, say, docker images: we've got no images, but the command works, as we can see. We can check the docker-compose command too — looks like it works as well, it gives us the available options. That's fine, so that's done. The thing is, by default, whenever you work with Docker you have to run those commands with sudo, like sudo docker this, sudo docker that. If you don't want to run all those commands with sudo, you have to add yourself to the docker group. It's optional, but it's maybe worth doing. When I run whoami, it gives me the name of the user I'm currently using on this Ubuntu system — my user is marek — and I can add that user to the docker group. I run sudo usermod, lowercase -a, capital -G, then docker, and then my username, which is marek. You put whatever the output of the whoami command is for you at the end. I press Enter, and now you either have to log out and log in again, or you can run the command newgrp docker, which should do the trick without logging out. Both the usermod and the newgrp commands are optional, so you can skip them, but then you have to remember to run all further commands as sudo docker whatever, okay? Now, because I ran those commands, I can run just docker.
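On Ubuntu, the whole preparation step we just went through is roughly this (a sketch of the commands described above):

```bash
sudo apt update && sudo apt upgrade -y
sudo apt install -y docker.io docker-compose

# optional: let your user run docker without sudo
sudo usermod -aG docker "$(whoami)"
newgrp docker          # or log out and back in
```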
The next part is also optional, but I want to install Portainer, so we will be able to clearly see our services, as we saw at the very beginning. Once we have them up and running, we can see them in Portainer — but that's all it is. We will not use Portainer for anything else; we will use it just for showing the running services. So you can skip this part if you want to, but it's just two commands to install Portainer anyway. It's better to Google them. Let's open Google and type "install portainer ubuntu" — oh, a typo, but never mind. Let's click that top link and scroll down, and we have the deployment section. That's what I need: I need this command, docker volume and so on. Press Enter, and now the second command, which downloads and installs the Portainer server. There is a little copy button here, if you can see it; I just press that, it's copied now, so I can paste it here and press Enter. "Unable to find image locally", so it has to download it — but that's normal, because I removed all Docker images. On the first run Docker also has to pull the image first, and only then is it able to run it. Pull complete.
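At the time of writing, the two commands from the Portainer documentation look roughly like this — always copy the current versions from the docs themselves, as the image tag and ports may change:

```bash
docker volume create portainer_data

docker run -d -p 8000:8000 -p 9443:9443 --name portainer --restart=always \
  -v /var/run/docker.sock:/var/run/docker.sock \
  -v portainer_data:/data \
  portainer/portainer-ce:latest
```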
So if we go back to the docs and scroll even further, we can see how to log in. I can copy the address again here, open a new window, paste it and go. And we've got Portainer; I will just set the password — you have to set one up — okay, log in. This password will be used from now on. Here we have to pick the environment; we only have one, local, so we just press that. Let's click Containers: we've got one, because Portainer itself is a container. You can see it's up and running, but it's not part of any stack, because there is no stack name. All right, but let's start building our Docker Compose file — that's the fun part. Let's Google again, maybe for something like "radarr docker compose". We've got the image first, but what I'm really interested in is that second link, from linuxserver.io. We will click on that, because most of our services will come from them anyway. We can scroll down a little bit, and we should have a template for Docker Compose — we can see it's here. I can use those two little squares in the top right corner to copy everything to the clipboard, so I will do that. Now, I would usually use vim or the nano text editor, but I just want to show you that any text editor is fine; I will use the graphical text editor here, and I will just paste my output into it. You can also use Notepad on Windows or TextEdit on Mac; it does not matter which one you use, you simply need some type of text editor. And what you can also do — that's what I did — is use hashtags. A hashtag simply means "ignore this portion", and we can use that to clearly state which service we are building in this part of the file. This is radarr — that makes things a little bit clearer.
if I save it as Docker compose dot
yaml licksave, it will change because Ubuntu recognizes the Yamel
format, and look at that. This file already looks
much better, I think, yes. Wouldn't you agree? So
that's basically it. We've got the reader for now.
We will leave it as it is. Let me maybe copy this portion, and I will paste it below, and I will say sonar. Maybe even extra space here. And we go back to Google and
we search for Sonar now. Sonar Docker Compose, we can see that Linux
server dot IO again. So that's what we
need. I will go scroll to the Docker
compose portion. And the thing is, we
need those services, that line, we only need it once. This and that we
can simply ignore. What I want to copy
is only this portion. That's what I'm
interested in. So maybe I will use those squares. But once I go back and paste, I will simply remove
those two lines. We have it already here
in first and second line, and we only need it once in entire Docker Compose
file. That's our sonar. So what we need
next maybe prowler. Prowler something we are definitely interested
in the indexer. So I say again, based, and I say prowler. We go back to our Google
and we Google prowler. And again, Linux server dot IO. That's what we
need. Scroll down, Locker Compose services. We can see prowler.
That's what we need, copy, go back here. Paste and remove those
first two lines again. All right. Let's maybe add
a little more spaces here. And now I copy prowler
again, paste it below. But this time, what
else doing it? Well, QB torrent, yes. I say QbtTrrnt. So we go back to the Google. I know it's boring already, but we say QB torrent. Linux server dot IO,
give it torrent, scroll down, or compose, copy I know you say. Geez mark. Boring. Okay, remove
those two lines. I think last one maybe worth pasting at this stage is
jellyfin yes. What do you think? Let me copy that. Let's add Jellyfin as well at this stage. I think I mentioned that, but I want to create a version with no VPN first and only then once we have
it up and running, we will add VPN later on. We will see how networking
changes within that service. So we will not add gluten at this stage, we
will add it later on. So we will just search
for jellyfin now, and for the time being,
that will be it, I think. Jerry fin, Linux server
that I O, scroll down, Docker compose, copy, paste, remove first two lines. And I don't know
what that crap is. I don't think I've
seen that before. I can't remember. Well,
let me just remove it. I don't know what it is,
let's get rid of it. Know what it is for
published server URL. But that's basically it. We've got eryfin,
we've got QBI torrent. Prolar sonar rider. We can save it now, so I
click that save button. And if I go back
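To give you an idea of the result, the assembled file follows this shape. This is a trimmed sketch with just two of the services, based on the linuxserver.io templates; the exact keys and paths in the current templates may differ slightly, yours will have one such block per service, and we will fix the volumes and add Gluetun later:

```yaml
services:
  # --- radarr ---
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/radarr/data:/config
      - /path/to/movies:/movies
      - /path/to/download-client-downloads:/downloads
    ports:
      - 7878:7878
    restart: unless-stopped

  # --- sonarr ---
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
    environment:
      - PUID=1000
      - PGID=1000
      - TZ=Etc/UTC
    volumes:
      - /path/to/sonarr/data:/config
      - /path/to/tvseries:/tv
      - /path/to/download-client-downloads:/downloads
    ports:
      - 8989:8989
    restart: unless-stopped
```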
here, let me clear. I have to make sure that this Docker compose
file is where I am. I am currently in
my home directory, home Marek, and by default, this text editor will save stuff in the same home location. So if I run LSL, I indeed see this
Docker compose file, and it's February 18, so that's exactly like now
that I've just created it. I just run Docker
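Just so you can picture the overall shape of the file at this point, here is a minimal sketch. The image names follow the linuxserver.io convention, but the real environment, ports and volumes lines for each service are the ones you copied from their pages, so treat everything below as a placeholder rather than the exact file:

services:
  radarr:
    image: lscr.io/linuxserver/radarr:latest
    container_name: radarr
  sonarr:
    image: lscr.io/linuxserver/sonarr:latest
    container_name: sonarr
  prowlarr:
    image: lscr.io/linuxserver/prowlarr:latest
    container_name: prowlarr
  qbittorrent:
    image: lscr.io/linuxserver/qbittorrent:latest
    container_name: qbittorrent
  jellyfin:
    image: lscr.io/linuxserver/jellyfin:latest
    container_name: jellyfin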
Now I run docker compose up -d and press Enter. It is creating the network, which it called marek_default, and it will start pulling the images, because I don't have any Docker images on this Ubuntu; I had some, but I removed them. So it will simply go through the process of pulling all the images, and once it has them, it will start running them as containers. We just have to wait for a while; I can't remember how big they are, but it usually doesn't take long, maybe two or three minutes. And now it's done; it took around two minutes, probably even less. If we go back to Portainer, we should see them now, and indeed they are here, up and running. The stack name is marek right now, but you can change that stack name. If you want to call it something specific, you can run the Docker Compose command with an extra flag; one second, let me show you. We run docker compose down, which simply stops and removes the current containers; it takes only a moment now, because we already have the images pulled, so everything happens in seconds. Then I can run docker compose with -p, and now I can specify the name for my stack, maybe in capital letters just to make it obvious it's something we came up with ourselves. I press Enter, and it should use that name now. One second, let me check. Interesting — the name always ends up lowercase, it seems; I didn't even know that. I typed capital letters, but you get lowercase letters anyway. But yes, that's how you can name your stack; it doesn't really matter.
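For reference, the two commands from that little detour look like this; the stack name arrstack is just a made-up example, use whatever you like:

docker compose down               # stop and remove the current containers
docker compose -p arrstack up -d  # bring the stack back up under the project name "arrstack" (names end up lowercase)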
So the arr stack works; well, at least it's up and running. But right now it's a bit of a mess and not very useful, and let me tell you why: we simply copy-pasted the default configuration for each service. Our biggest issue is really in the volumes block. Let me show you an example; let's go down to qBittorrent, maybe. In every line of that volumes section, for every service, you will see a colon, and each of those lines is called a bind mount. The colon divides the line into two separate elements: whatever is on the left side of the colon is the location on the disk on the host, meaning on this Ubuntu system, and whatever is on the right side of the colon is the location inside the container itself. So when the container writes locally to its folder called /config, it thinks it is writing to /config, but Docker actually writes that data to the corresponding physical location on your host operating system — in my case, Ubuntu. Let me show you what that means.
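As a small illustration, the qBittorrent volumes we copied look roughly like this (the exact host paths are whatever the linuxserver.io template had):

volumes:
  - /path/to/qbittorrent/appdata:/config   # left of the colon: folder on the Ubuntu host
  - /path/to/downloads:/downloads          # right of the colon: folder inside the container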
Let's go and have a look. Let me cd to the root folder, to that first forward slash, and run pwd just to show that I really am in the root folder. If I run ls -l, I can see a folder called path. We didn't check before, but believe me, it wasn't here earlier; you can even see the February 18 timestamp, it has just been created, about a minute after the previous command. If I go further, cd into it and list it, there is a folder called to; that's this /path/to part from the compose file. If I go further, cd to, I am now in /path/to, and we have all that stuff in here. This qbittorrent folder, for example, was created because of this entry in the file. Basically, what happened is that when I ran that docker compose up command, Docker simply checked all those locations on the left side of the colon, all the locations on the host operating system, and if a location did not exist, it created that folder on Ubuntu for me. That's exactly what happened. Why is this a problem? Have a look at the downloads, for example: /path/to/downloads here on the left. This folder is used by qBittorrent to save whatever it downloads, a Linux ISO or whatever you're trying to fetch. But look at that: there is also download-client-downloads and downloadclient-downloads. If we scroll up to Radarr and Sonarr, we can see they are configured with different locations on the host. Radarr says /path/to/download-client-downloads, with dashes, so it uses that folder, and Sonarr is very similar but again different, because it's /path/to/downloadclient-downloads, with no dash between download and client. So what happens now is that qBittorrent downloads into one folder, Sonarr tries to read from a second folder, and Radarr tries to read from a third.
Then there is another thing: you can see some of these folders belong to user and group root, and some to marek:marek. Let me clear the screen and run a different command: ls -l first, that's what we had, but if I run ls -ln, that should give you a clue, because the root-owned folders are the ones created by Docker itself when we ran docker compose up, while this one was created by the container. All our containers are configured with a process user ID (PUID) and a process group ID (PGID), and they are both set to 1000. That's exactly what we can see here: this user marek really has the numerical value 1000 for the user and 1000 for the group. I can even check that with the id command: you can see my user is marek, and I belong to a group called marek, which has the same numerical value.
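If you want to check the same thing on your machine, it's just these two commands; the username marek is obviously specific to my setup:

id      # shows your uid and gid - here: uid=1000(marek) gid=1000(marek)
ls -ln  # lists folders with numeric owners: 0 0 for Docker-created ones, 1000 1000 for container-created ones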
So here we have a mix and match of root-owned and marek-owned folders. And if I go into that qbittorrent folder, we get another one, appdata, which is yet another folder that was supposed to be created. If I go into that appdata, all the stuff we can see here was written by qBittorrent itself into its config. Let me show you what I mean. Maybe I will open another session and say docker exec -it qbittorrent sh, where sh stands for shell; I want to open a shell inside the container. Now I go to the /config folder, because I want to check what is on the right side of that colon, and if I run ls -l, I can see the same files, and they belong to marek:marek, because they were created by qBittorrent. So the container thinks it's writing to its local /config folder, but it really goes through Docker and ends up written to this physical location on my Ubuntu server.
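That whole check, in one place, looks more or less like this; it assumes the container_name from the copied template really is qbittorrent:

docker exec -it qbittorrent sh   # open a shell inside the running container
cd /config && ls -l              # the same files we just saw under the bind-mounted host folder
exit                             # back to the Ubuntu host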
So that's basically how it works: the container thinks it writes to its /config folder, but all the data is actually written to a physical location on the Ubuntu server. And that's our current problem with the config: each of those containers reads and writes to a different download folder. That's fine, though; let's fix it, let's amend this file. First, let's remove the containers. I'll close the terminal; we can also remove them here from Portainer if you want to, so I click Stop and then Remove. I know I said we're not going to use Portainer for anything, but I lied a bit. Let me open the terminal again. What I want to do is go back to that root folder where we have the /path directory and remove it; we don't really need it. So: sudo rm — sorry, sudo rm -r /path. That's it; if we list the folder now, there is no /path anymore. But we now have to either create a new folder or use one of the existing ones to keep our downloads, so that all of the containers can store and read files from the same download folder. I thought maybe we'd use /media; I think we used that one before. If I go into /media, you can see it's basically empty; we can ignore this one entry, it's just there for the Ubuntu installation image itself. So I go back to the root folder and say sudo mkdir -p, where -p means also create parent folders if needed; it won't be needed here, but if you decided to create your own deeper path, the entire path would simply be created for you. Then I add arr and press Enter. So now that's what we have: the newly created /media/arr folder.
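Put together, the cleanup and the new shared folder are just these few commands; /media/arr is simply the location I picked, any folder on a disk with enough space will do:

sudo rm -r /path          # remove the /path/to/... tree that Docker auto-created
sudo mkdir -p /media/arr  # create the shared folder (-p also creates missing parent folders)
ls -l /media              # confirm the new arr folder is there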
Now we can use this folder as the host location in our Docker Compose file. Let's go to qBittorrent first: it's no longer going to be /path/to, it's going to be /media/arr/qbittorrent, and not appdata but config. If Docker writes the container's /config folder there, why would I call it appdata? I'll call it config so I know exactly what it is; so the container's config will be written to /media/arr/qbittorrent/config. But that's not even the most important one, because we are really worried about the downloads, so I also change that line to /media/arr/qbittorrent/downloads. That's where qBittorrent will be saving all its files from now on, and we now have to match that folder in all the services that are going to read from it. I simply copy this physical location, scroll up, and in Radarr I replace its downloads path with my new one, so it reads from /media/arr/qbittorrent/downloads, and I do the same for Sonarr. Whatever qBittorrent saves into its downloads folder, Sonarr can now read from that location, and the same goes for Radarr. I don't think there are any other services that use that folder at the moment; Jellyfin doesn't feed from it directly. Also remember that we are not changing anything on the right side of the colon; that's what the container uses internally, and we don't want to touch it. Now let's go back to the very beginning, to Radarr. We've got the downloads folder sorted out, but I still have that /path/to/radarr/... config line. I don't want /path/to, I want to replace it with /media/arr/radarr, and again not data; if it's the config, I want to call it config on the host as well. The movies line I change to /media/arr/radarr/movies, so whatever Radarr stores in its /movies folder inside the container will be saved locally on my Ubuntu in /media/arr/radarr/movies. Let's do the same for Sonarr now: /media/arr/sonarr, and I will also call it config. Then the TV series: I want all that stuff in /media/arr/sonarr/tvseries. It says /tv on the container side and tvseries here on the host; that's good enough. I scroll further and we've got Prowlarr, so again /media/arr/prowlarr, and I replace the host folder name with config, so I know it's the config for this container; if that's where it stores its configuration files, why would I call it data? I don't even know. Let's go further. qBittorrent is already sorted, but for Jellyfin you have to remember something. Its config mapping is even sillier, because the host folder in the template is called library while it maps to /config; I want to store that in a config folder too. The media paths, though, are actually something we want to read from Sonarr's and Radarr's folders. I know this is confusing, but remember that Sonarr is used in our stack for TV series and Radarr for movies, so we have to match the locations of Sonarr and Radarr, because that's where we expect our files to end up. This is basically how it works: we search for a movie in Radarr, for example. Radarr sends that request to qBittorrent, qBittorrent downloads the file and places it in the /media/arr/qbittorrent/downloads folder, and then informs Radarr that the file has been downloaded. At that point Radarr creates a so-called hard link in its movies folder, and that hard link points to the file sitting in qBittorrent's downloads folder. This is where people get confused, because they think they now have the same file in two locations: one in /media/arr/qbittorrent/downloads and a second one in /media/arr/radarr/movies. But that's not the truth. The truth is that in /media/arr/radarr/movies Radarr only creates a hard link to the already existing file, the one downloaded by qBittorrent, and that link does not take up any extra space on your hard drive. You basically have two names that point to the same data on disk. We had a whole video about hard links and soft links in the Linux series, so if you want to learn more about it, please watch that video.
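If you just want to see the idea quickly, here is a tiny, made-up illustration you could try inside /media/arr once those folders exist (the file name is invented, and hard links only work within the same filesystem, which is one more reason to keep downloads and media on the same disk):

cd /media/arr
echo test > qbittorrent/downloads/movie.mkv                  # pretend this is the downloaded file
ln qbittorrent/downloads/movie.mkv radarr/movies/movie.mkv   # hard link - a second name, not a copy
ls -li qbittorrent/downloads/movie.mkv radarr/movies/movie.mkv
# both entries show the same inode number, so the data exists on disk only once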
But what it means for us is that we simply have to point our Jellyfin at those Sonarr and Radarr locations. Again, I don't touch the right side of the colon; that's what the container uses, and we don't want to change that portion, only the left one. I have to find the path to the TV series, so I simply go back, find Sonarr's TV series host path, copy it, and paste it into Jellyfin, so Jellyfin can find the TV series once they are downloaded and the hard link is created. For the movies I scroll up to Radarr, copy that physical location on my Ubuntu server, go back and paste it here. Again, remember: you are not interested in the container-side values at all, they are simply the defaults the container will use; we only change the left portion.
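To make all of that easier to follow, here is roughly what the volumes sections look like after the edits, gathered in one place. The container-side paths are whatever was in the linuxserver.io templates we copied, and the host folder names (config, downloads, movies, tvseries) are just the names I chose:

# qBittorrent
  - /media/arr/qbittorrent/config:/config
  - /media/arr/qbittorrent/downloads:/downloads
# Radarr
  - /media/arr/radarr/config:/config
  - /media/arr/radarr/movies:/movies
  - /media/arr/qbittorrent/downloads:/downloads
# Sonarr
  - /media/arr/sonarr/config:/config
  - /media/arr/sonarr/tvseries:/tv
  - /media/arr/qbittorrent/downloads:/downloads
# Prowlarr
  - /media/arr/prowlarr/config:/config
# Jellyfin
  - /media/arr/jellyfin/config:/config
  - /media/arr/sonarr/tvseries:/data/tvshows
  - /media/arr/radarr/movies:/data/movies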
Whew, that was a lot of changes, wasn't it? Let me just double-check that we have everything as expected; I think we do. Let's save it. I'm in /home/marek, my home location, which is where the docker-compose file is, and we say docker compose up -d. That's it, that was quick. What it means, though, is that if I go to the root folder now and run ls -l, you'll see there is no /path folder anymore. But if I go to /media, we've got that arr folder, so let's go there: now every service has its own folder here. And if I go into the qbittorrent one, for example, this downloads folder will be used by all the services — I mean by Radarr and Sonarr — because they are configured to read from that location. You might have noticed that we again have a mix and match of root-created folders and marek's. We can change that in a second; let me clear the screen. I can say sudo chown, then -R — you can also spell it out as --recursive, but -R is shorter — then 1000:1000, because I want to set user 1000 and group 1000, and I apply that to the /media/arr folder. When I run it, it asks me for my password, because I need sudo for that. After the command finishes, no matter where I look, everything under /media/arr should now belong to user 1000 and group 1000. I know it shows up as marek:marek; that's irrelevant, we only care about the numerical values, where marek is user 1000 and group 1000. And that matters because every single container uses that same user and group: those containers are presented to the Ubuntu system as user ID 1000 and group ID 1000 as well; that's how Ubuntu sees every single container. So if all the containers run with the same user ID and group ID, and all those folders already belong to user 1000 and group 1000, we shouldn't have any problems reading from or writing to any of those locations.
how the traffic goes now. Currently, Docker uses a
default bridge network, and currently our
traffic goes out with our public IP address that we got from our Internet
service provider. But our ISP can see where
all those connections go to. What's the destination
IP address? So what we're going to do now, we're going to change
this behavior, and we will add gluten VPN
container that will send the encrypted traffic
to a chosen server. Our VPN provider first and
only then this traffic will be forwarded further
with the IP address also changed to
that VPN provider. So when the traffic goes back, it will also go back first to our VPN provider first and only then as encrypted traffic will go back to our gluten container. So that's how our
traffic will change. But for that to happen, we
need two things, actually. We need that gluten container, and then we also need some
kind of VPN provider. So a company that provides
that VPN connection for us. On gluten, we can configure it, but for example, I
went for Nord VPN. I mean, it's not
sponsored by Nord VPN, but you can choose
whatever you want. The gluten service can be
configured with Nord VPN, surf Shark, and any
other popular provider. You can find templates that make it very easy
to configure it. So let's first maybe Google. No, first let's get rid of
those containers, okay? Okay, that's what happens when you are in wrong directory. Okay, let's go to Google again. And this time, let's
Google for gluten. How you write it
gluten docker compose. And it's not from linux
dot IO this time. Let's click on that first
link maybe from Github. So let's scroll
down. You'll have all the explanation
what it is, blah, blah. And let's go further
and further. Actually, it was, you know, supports VPN, cyber ghost, Expos BPN, blah, blah, blah. As you can see, I think all most known providers
are supported. So let's scroll further, and we've got the setup. And it says, Here is Docker
compose for the laziest. But what we can do
instead, it says, these are now instructions specific for each VPN provider. If we go to that Wiki, it's Wikipedia for gluten. And as I said, I've
got the Nord VPNs, so I can go to providers
here, setup providers. And I simply where is
that Nord VPN. It's here. I just click on that
and that gives me the Docker Compose template for service provider Nord VPN. If you go with another provider like Express VPN Fastest VPN, you simply click
on related links. So I will copy this one and I
just paste it in my convict in my Docker compose
file. It's past it here. I don't need that
services or version, but I will copy that so we can have clear division
between those services. That's the gluton
already configured. But you know what I noticed it's missing container
name for some reason. I don't know why,
but let's copy that. Because if we don't
have container name, it will get random names. So that's not really
what we need. We paste that and I will
say container name, gluten. Will make it clear that our container name
will be gluten, not some random container name. Then if you wonder
what this is like a network administration
access because gluten has to configure a device
called DevNet tun. So it has to have
access or permissions to be able to create that VPN tunnel for us,
and that's how it's done. Using this CAP ad
and those devices. And now environment, we
don't have to change I mean, provider, we don't
have to change. The open VPN or Wire guard, it's your choice what
you want to use. But the thing is the
most important thing is that user and password. That's something you will
get from a provider. For example, I signed
up for the Nord VPN. If I log on now to
my Nord VPN account, If I go to that Nord VPN, if I scroll down, I've got I mean advanced
settings, manual setup. If I click on that setup
Nord VPN manually, I will have something
called service credentials. To get that service credentials, I have to verify email. And now I can see my service
credentials for the service. So I had to pay for that, and how you obtain
service credentials might vary from
provider to provider. You have to figure out where
to find service credentials. This is where you can find
them on Nord VPN page. But if you choose
different provider, you have to figure out where these service credentials
can be found. So for me, they are here, so I can simply copy them. So this is user name. I
just copy it clipboard. My username is DT and
my password is DT. But Server countries,
this is optional. If we go back to gluten, it says required
environment variables and optional variables. Server countries,
this will simply say, you can see come a separated
list of countries. You can state what
countries you want to connect to when
you use that VPN. What I mean, I have it
configured with Netherlands. I mean, it was configured
with Netherlands by default. That means our public
IP address given to us by Nord VPN will always be
somewhere in Netherlands. But you can add some more
like Germany or whatever. After a comma, or
you can even be more specific because
they say here, you can have not
server countries, but server regions or
even server cities. You can have list of cities
where they have servers and they can give you a
public IP from that location. You will see what I
mean in a minute. So don't worry about
it. But that's it. Basically my config right now that's my Docker
compose file. I just save and let's
try to run it now. I say, Docker Compose D, and let's see if it works. Can see it's pulling
the gluton image. Says download a new image, and they're all up
and running now. If I go to Pertainer, they should be here, and I can see gluten is now added as well. If I check the logs for gluten, al logs Gluton. Look at that. Public IP address
is whatever it is, but it's from Netherlands. That's exactly what
this confic does. Server countries Netherlands. And if we disconnect
and connect again, we probably get
different IP address, but it will still be
from Netherlands. Every time our traffic, even though I'm in UK, I I can show you let
me show you something. Let's say I do doer exac IT, QB trent, and then
Shell, connect to Shell. If I run curl I configure me, you can see that I have
different IP address. This is one address, and this
is different IP address. Basically, well, maybe
even better if I go here, just Google what is my IP. That's my IP, 924098 blah, blah. And you can see that
I'm in England. That's correct. Country
United Kingdom. That means my gluten service
has now a VPN tunnel between MIP and the Netherlands
IP address. That's fine. We've got that gluten,
but right now it doesn't do anything
because we haven't routed any container
traffic through that gluten container yet or through
that gluten VPN connection. Let's now redirect our traffic through the gluten container
and through that VPN tunnel. Let's get out of here. Maybe clear. I say,
Docker compose down. And to redirect the traffic
through the gluton service, I simply add one line, and this line should say
network pode service gluton. That service gluton should be
in quotation marks, right? So I just copy that. So the
radar will be redirected. Now, let's redirect the
sonar prowler cubitorrent. And Jerry Finn, we
can ignore really. Jerry Finn mainly just reads
the data from those volumes, but it doesn't do
much networking, so we can leave it as it
is. Doesn't really matter. But what we have to do next, all those ports that we can see, like every service
has its port, yes. For example, radar
has port 7878 on the host and 7878 on
the container side. And it's used, for
example, this host port, 7878, it's used when I want
to connect to that service. So I would type HTDP
local host 7878. How I would connect
to the radar. Well, it's down now,
so I can't connect. But basically, that's
how it used to work. But now because we change
that network mode, now the gluton is the service that deals with our
networking, right? So that means we have to
get rid of those ports from here and we have to paste them as part of
gluton configuration. So let me cut it from here. This is for radar. Yeah.
So I will go down to gluton and I will paste it doesn't really
matter somewhere here, maybe. Ports, 7878. They will add information
this was radar, wasn't it? That was radar, but I have to do it for other containers as well. Any container that was
passed through gluten, I will have to remove
this part from here. So this is the sonar. I have to remove
it from here and paste it as a gluten
configuration now. Good radar ser now prowler. Cut it from here and
we paste it there. I'm just making a note which
service it belongs to, so it's easier to
find it later on. That was the prowler and born QBI torrent has three
ports because it has web GUI, that's how we connect
to the QBI torrent, but it also has torrenting port and we have to remove
the mole from here. We paste it here as well. All right. I think that's it. So let's save it and let's
see if it still works. I will use up arrow,
Docker compose, up D, press Enter, and they are up and running. Again, let's see if the traffic
14. Route any docker container through VPN! : You guys asked how to redirect any Docker container traffic
through the Gluetun VPN client? That's what we're going to do in this video. We did something similar for the arr stack, but in that video the Gluetun VPN, qBittorrent and the arr apps like Prowlarr, Sonarr and Radarr were all part of the same Docker Compose file. You asked, though: what if I want to reroute the traffic of a standalone Docker container, one that is not part of that stack? We will see how it can be done, and we will use something called container mode and service mode. Let's start from the beginning. I will use NordVPN as my provider, and no, this video is not sponsored by them; it's simply what I use. The solution presented will work with most popular VPN providers, not only NordVPN but Surfshark or whatever you have there. This is my Ubuntu server; I tend to use Ubuntu, but any Linux will do. Let's first look again at how we reroute the traffic within a Docker Compose file. I will open the terminal, and if you have a fresh installation of Linux, you will need to run a few commands before you can use Docker and Docker Compose at all. You need to run sudo apt-get update and sudo apt-get upgrade first, and type the sudo password. Once that's done, we can clear the screen, and then you need sudo apt install docker.io docker-compose, which gives you the docker and docker compose commands. As you can see, I already have them installed, but on a fresh installation you will need to run this anyway. The next one is optional, but if you don't want to type sudo in front of every Docker command, you have to run one more thing. First run whoami, which shows your current user on the system, and then add that user to the docker group with sudo usermod -aG docker followed by the user that was just displayed. Then you either have to log off and log back on, or you can simply run one more command, newgrp docker, spelled NEWGRP. That's all we need.
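Put together, the one-off preparation looks roughly like this; replace marek with whatever whoami printed on your machine:

sudo apt-get update && sudo apt-get upgrade   # refresh and update packages
sudo apt install docker.io docker-compose     # install Docker and Docker Compose
whoami                                        # shows your current user, e.g. marek
sudo usermod -aG docker marek                 # add that user to the docker group
newgrp docker                                 # apply the new group now (or log out and back in)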
We've now got Docker and Docker Compose, so let's see what the rerouting looks like when we stay inside a Docker Compose file, inside a stack. We go to Google; I don't know which containers we should use, maybe qBittorrent again, so I search for qBittorrent docker compose. This is the one from linuxserver.io, so I scroll down, further and further, and here it is: the Docker Compose section, marked as recommended. I copy it using those little squares, and I open the text editor, because that's the most user-friendly option for everybody, I think. I paste it there and save the file as docker-compose.yaml; by default it is saved in my home directory, which is /home/marek. I click Save, and that's my qBittorrent. Let's add maybe Prowlarr, like we had last time: I search for prowlarr docker compose, that's the one, scroll down to the Docker Compose section again, copy it and paste it into the file. This time, though, I don't need the first two lines with services, because we only need that once and it's already at the top, so we get rid of that part. That's fine, let's save it as it is and maybe check whether it works at all. I go back to my terminal and run ls -l; pwd shows I am already in my home directory, and the docker-compose file is right here, it's exactly this file. I say docker compose up -d and press Enter, and we've got qBittorrent and Prowlarr up and running. If I run docker ps, I can see they've been up for 15 seconds, but we are not rerouting anything yet; I don't have Gluetun and I don't have NordVPN configured. So if I now run, on my host, curl ipinfo.io, it tells me my current public IP and my current location: I am in England, that's correct, my IP starts with 92, it's 92.40-something, and the time zone is Europe/London, because that's where I am. If I check the same inside any of the containers, so let's say docker exec -it qbittorrent sh to connect to a shell, I'm now logged on to the container, and if I run the same curl ipinfo.io command, the information is exactly the same. I can also run curl ifconfig.me, which shows just the public IP address that my Internet service provider gave me. Alright, let's exit and clear the screen. So that's where we are: I am in London, and my IP starts with 92.40.
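The quick "before" picture, as commands; the container name qbittorrent matches the linuxserver.io template:

curl ipinfo.io                   # on the Ubuntu host: public IP, city and timezone from your ISP
docker exec -it qbittorrent sh   # open a shell inside the container
curl ipinfo.io                   # same IP and location - nothing is tunnelled yet
curl ifconfig.me                 # just the bare public IP
exit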
Let's now add the Gluetun VPN. I go back to Google and search for gluetun docker compose; the first link at the top, from GitHub, let's click on that. Scroll down and you'll see the setup section with an example, but even better, go to the project's wiki and find whatever your provider is: in the table of contents there's a setup providers section, click it and find your provider. You've got CyberGhost, ExpressVPN, FastestVPN and so on, loads and loads of them, but for me it's NordVPN; if you have Surfshark, it's there as well. I go back and click NordVPN, and there I can find a Docker Compose template too, so I click the little squares to copy it, go back to my file, and add Gluetun. Again, I don't need the top part, the version and services lines; services appears just once, and all the services sit under it: qBittorrent, Prowlarr and Gluetun. But we need to modify it a little. First of all, I don't know why, but this template doesn't set a container name; you can see container_name on Prowlarr, but Gluetun for some reason doesn't have one, so we add it manually: container_name: gluetun. The second thing is my credentials from my provider. My provider is NordVPN, and I need the user and the password NordVPN gives me. One second, let me move this around and add some spacing so it's clearer. This is what I'm talking about: OPENVPN_USER and OPENVPN_PASSWORD. Where can I find them? I have to go to my provider's website, NordVPN, and sign in to my account. I click NordVPN, then scroll down to "Set up NordVPN manually" under the advanced settings. If I click that, I can see service credentials, which is exactly what I need, and to see them I have to verify my email again. This is the username and the password I can use in that Gluetun configuration, so I just copy the username, paste it in, go back, copy the password and paste that in too. That's cool. Let's save the file again, go back to the terminal, and, using the up arrow, run docker compose up -d and press Enter. Docker can see that qBittorrent is fine, we didn't change anything there, Prowlarr is fine for the same reason, and it simply added the new service, which is Gluetun. But note that at this stage we are still not routing anything through that Gluetun.
Okay, so we know it's working, and we can now go back to the file to route the traffic through the Gluetun container. When we're inside the same Docker Compose file it's pretty simple: here, in the qBittorrent service, I add one line, network_mode, and I say service:gluetun. I can do the same for Prowlarr, maybe under its image line, it doesn't really matter where: the same network_mode: "service:gluetun". That's only the first bit, though, because I then have to move the ports. Whatever ports those containers publish, I have to move from the container to the Gluetun container, so I cut them and paste them into the Gluetun configuration, because Gluetun is now responsible for the networking of those containers. I paste them and add a little comment saying this one is for Prowlarr, and then I do the same for qBittorrent: cut the ports and paste them there. That's all I need, so I just click Save to store the new configuration, go back to my terminal and say docker compose up -d again. As you can see, that's not how it's done, because I was supposed to take the stack down first. It's docker compose down, then up arrow, docker compose up -d, press Enter, and now it works. Sometimes you can get away with just re-running docker compose up, and sometimes you have to take the entire stack down to be able to rebuild it. Let me clear that mess. I say docker ps and they are up and running: we've got Prowlarr, qBittorrent and Gluetun. So let's go back into Prowlarr, or qBittorrent, it doesn't matter: docker exec -it qbittorrent sh, and we run that curl command again. And look at that: it says I'm in Amsterdam, but a minute ago I was in London, so what happened? Well, that's the configuration. If we go back to the Docker Compose file, in the NordVPN configuration we can say which countries, regions or even cities we want the VPN to connect to, and our IP is then shown as if we were physically in that location, whatever we type there. Because Netherlands was the default, I am shown as being in the Netherlands, in Amsterdam, to be exact. So it works as expected. To confirm Prowlarr, which should also be tunnelled through the VPN, let's exit and clear the screen, then docker exec -it prowlarr sh and run the same thing; curl ifconfig.me will just show me the IP, what I really want is curl ipinfo.io. And indeed, Prowlarr is also tunnelled through that VPN.
And this is cool, and it's running, but what if I want to add another container that is not part of this stack? Let's say I want to add a completely different container, maybe Firefox; yes, you can run Firefox as a container. Let's search for Firefox docker compose. I won't actually want to run it through Docker Compose, but never mind, the page will work fine: it's Firefox from linuxserver.io, that's what I need. We scroll down and we have the Docker Compose section, which is what I would use if I wanted to add it to my Compose stack, but we also have Docker CLI: I can run it as a standalone Docker container, completely separate from that stack, simply by running this command, docker run -d --name and so on. But before we do that, have a look: these are the ports, and this portion of the CLI command is the equivalent of the ports section in the Compose version. If you read the documentation, you will notice that port 3000 is for plain HTTP traffic and port 3001 is for HTTPS. To be able to tunnel this container through the Gluetun VPN, I have to add this port to Gluetun first, before I even run the docker run command, because if I want to route this through the Gluetun VPN, we have to prepare the Gluetun container. So I will configure port 3000, maybe. Let me show you what I mean: let's go back to the Compose file, and I add another port, 3000 on the host and 3000 on the container, with a comment saying this one is Firefox. Let me save it, press the up arrow and run docker compose up -d... and I didn't take it down first again, so, alright, let's repeat: docker compose down. I keep forgetting about it. Sometimes Docker is a bit more forgiving, but definitely not for ports; as you can see, for anything to do with ports you have to take the stack down first and then run up -d again, it will not let you just change the port configuration in place. Never mind, now the stack is up and running again: Gluetun, qBittorrent and Prowlarr. So now let me clear the screen; I should be able to run that docker run command. I copy it and paste it here, but remember what we have to do: we moved those ports to Gluetun, so we have to get rid of the port options here, we don't need them anymore; it's exactly the same process as with the other containers. I will leave the volume part in, maybe, and remove only the ports portion. And there is one thing I want to add, and it's the network mode, but this time it's container:gluetun, so I type --network-mode equals container:gluetun. Now I should be able to run it; let's see. Oh, sorry: network_mode is what we use in the Docker Compose file; here on the CLI it's not network mode, it's simply --network. So let's get rid of that mode part: just --network=container:gluetun.
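For reference, the final command has roughly this shape; the image name is the linuxserver.io one, and any environment or volume options are whatever you kept from their docs:

docker run -d \
  --name=firefox \
  --network=container:gluetun \
  lscr.io/linuxserver/firefox:latest
# note: no -p flags here - published ports clash with container networking,
# the Firefox port was added to the gluetun service instead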
Let's try again, and now it's up and running. This long hash is the identifier of our Firefox container, and if I run docker ps I can see Firefox there. I can connect to that container by going to localhost on port 3000, so I type http://localhost:3000, and we get a browser within a browser. What I'm really interested in, though: let me open a new tab and ask "what is my IP". The page already offers to translate from Dutch to English, which in itself means it works as expected; why would it want to translate from Dutch? Because I'm connected to the Netherlands again, Amsterdam, to be exact. I can also confirm that from the terminal: let's clear, and if I run docker exec -it firefox sh and then that curl ipinfo.io command, it confirms it too. So even though this Docker container is not part of the stack, we can use the --network option to point it not at a service this time, but at container:gluetun. Just remember that Gluetun has to be up and running already, and it should already have the port prepared for the new container that we want to pass through the Gluetun VPN. That's all I wanted to show you today. I really hope all of that makes sense, and thank you for watching.
15. Build background remover app! (using docker container): Let me show you how you can build your own background removal tool. You don't need to know anything about programming, and you don't need any other particular experience, to be honest; you just need to follow this video to have this tool up and running locally on your PC or laptop or whatever you have there. But first, let's have a look at what this tool even looks like. This is basically it. As you can see, it has its own web user interface, and it's pretty basic, which is exactly what I like, because it has only one purpose: to remove the background from a chosen picture, which means there is no need for a million buttons and a very complicated user interface. And let me just add that this is running on my local machine; as you can see it says localhost, which means I don't connect to anything outside of my network, it's running on this very machine. As you might have noticed, I'm using a Linux desktop operating system here, but you can do the same on Windows, on Mac, or on any other operating system where Docker can be installed, and in fact Docker can be installed on almost anything, so it doesn't really matter what operating system you have on your laptop or PC. So, how does it work? I don't think I have any pictures here, so we might have to download one quickly. I will just Google "cat" and search for images; we've got some lovely cats, what about this one? Let's save it. Now the picture should be in my Downloads, so I go back to the tool, and I notice I can click anywhere I want, I don't even have to click on those icons, or you can drag and drop. So let me click somewhere here and choose the file that has just been downloaded. I select it, wait a few seconds, and we'll see what happens. A new file has just shown up, and if you click on it, you can see it's exactly the same cat but with the background removed. I realise that maybe wasn't really challenging for the program, because this cat had a plain background anyway, so maybe let's search for an elephant instead. What about this first one? Let's save the elephant, go back to our tool, and this time pick the elephant picture. I click Select, wait a few seconds again, and shortly it's there: you can see a new file has been created. To confirm, if we click the elephant, that's our original picture, and if we click the new file that has just been created, it's the same elephant but, again, with the background removed. Okay, I think that's it.
We know how it works and we know what it looks like, so let's build it from scratch now. I will close that and move the window away, and this is another Ubuntu instance. This Ubuntu doesn't have anything installed, it doesn't have that program; we will build it from scratch on this server. And how do we even start? We go to Google and I search for rembg web app tutorial, and maybe I add github as well, because that's exactly what we want. Now, it's not my project: if we click it, we can see the author is Jeff Delaney from Fireship. It's his repository, so it's his project, but we are okay to use it. You can read the README file with some instructions, and you can see the entire code that is in there. What we really want to do here is just click the Code button; there are various ways to download this repository, and I will just download it as a zip, probably the easiest way. It took just a few seconds; I click it and simply unzip it, so now in my Downloads I've got both the zipped and the unzipped version. Let me open a terminal: if I go to my Downloads, I can see the folder and the zip file, so let's cd into the unzipped folder. You can see all the files here; you might of course be interested in the README.md file, but I will just check the Dockerfile and see what's in there. Basically, this is the base image for the project, and here is a little instruction: download this file, and this is the link, to avoid an unnecessary download later. The other interesting thing is that EXPOSE line: it tells us which port this application will listen on, and you noticed the application was running on localhost port 5100, so this is very important information, because we now know which port the application listens on. But first, let's download that file: I can just copy the link and run a command; let me clear the screen first, then I run wget, paste the link and press Enter. I will clear again, and if we check the files now, you will see the newly downloaded file, something-dot-blah-blah.
Alright. Now there are a few commands that I have to run as root, or as administrator if you were on Windows, so here on Ubuntu I will just do sudo su, and now I'm running as root. The first thing I need is to install Docker, because what we are going to do is build a Docker image, and all these files will be included in that image. On Ubuntu it's apt install docker.io; obviously, if you are on Windows or Mac, just Google how to install Docker there. I press Enter, and Docker is installed — or maybe I had already installed it, I can't remember, but basically, even if you didn't have it, after running this command you will. Next we can build the Docker image, but it's important to still be in the same folder: I am here in /home/marek/Downloads, inside the unzipped folder. Let me clear the screen again. The command I need is docker build . and the dot means "build a Docker image based on the Dockerfile located right here, in this folder". Then we can add -t, which stands for tag; the tag is whatever we want to call the image, so let's call it rembg, for remove background. Now I press Enter and the image is being built; you can see it starts from the Python base image and downloads all the necessary dependencies and layers, and all we have to do is wait. Awesome, it took maybe two minutes or so, but we can see "successfully built" together with the image ID, and we also created that tag, rembg, which is much easier to remember. That means that if we now run the docker images command, we can see the newly created image; I have some old ones too, but there is the image tagged rembg. Awesome, now we can just run it. Let me clear again. There are many ways you can run it, but I think the easiest one is docker run with --network host, which is simply the type of network to use, and -d, followed by the tag we gave our image, which was rembg, for remove background.
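In one place, the build-and-run part is essentially this; the folder name of the unzipped repository may differ on your machine, and rembg is just the tag I chose:

cd ~/Downloads/<unzipped-repo-folder>   # be in the folder that contains the Dockerfile
docker build . -t rembg                 # build an image from that Dockerfile and tag it "rembg"
docker images                           # confirm the rembg image is listed
docker run --network host -d rembg      # run it detached, sharing the host's network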
Now I just press Enter, and it will display that long string, but you can also check it if you want: run the docker ps command and you can see our container has been running for 28 seconds. That means we can now connect to our application: open a new tab and type localhost:5100, and we have exactly the same application as we did on the other Ubuntu server. I haven't got any pictures on this machine, but you can basically perform the same operation as we did at the very beginning. So I hope that helps, and that's all I wanted to show you today. Thanks for watching.
16. (pre-req for Proxmox vlan-aware video) What is VLAN? How does vlan work?: In this video I would like to talk about VLANs. I'm pretty sure you've come across that term more than once, and maybe you wonder: what exactly is a VLAN? How do VLANs even work? What is the technology behind them? What is, for example, an 802.1Q tag, or what is the default VLAN? That's all we're going to talk about today. VLAN stands for Virtual Local Area Network. For now we can skip the word virtual and focus on the second part, the Local Area Network. What is a local area network exactly? It's something we already have in our homes: a collection of devices connected to our router, either wired or wirelessly. Our phone, our laptop, the smart TV, the wireless printer: if they are all connected to the same device that our service provider gave us, and we didn't change any configuration on that device, then all our devices at home form what is known as a local area network. If we configure our devices to be visible on that local network, then all of those devices in our home can see each other and communicate with each other. That is a local area network. What is a VLAN, then? Let's say we bought some CCTV cameras, and we don't want those cameras to be in the same local area network as our computers or our home servers. I need those cameras, but believe me, it's better not to have them in the same network as all of our other devices; God knows what kind of software runs on them and what it does, whether it spies on us or worse, I don't know. So what can I do? I can split my single local area network into multiple isolated virtual local area networks. In computer networking you'll often hear that word virtual: for example, when you split one big server into smaller logical servers, you call them virtual servers or virtual machines. It's the same with a LAN: once you start dividing your LAN into smaller chunks, you call those chunks virtual LANs, which simply means they become logically, or virtually, separated from each other. Anyway, I was talking about those CCTV cameras, but you might want to separate other network devices too. Maybe you want to keep your printers in their own designated VLAN, maybe you want to separate the servers from the PCs, and so on. This is basically how every single company separates its network as well: it's safer, it's easier to manage, and you also end up with smaller broadcast domains. But let's not get into broadcast domains; let me simply show you how it's done, and how VLANs can make all those different devices completely separated from each other.
For that I will use a free piece of software called Packet Tracer. It lets you simulate real-life computer networks, and you can get this tool for free; you just have to sign up on the Cisco Networking Academy website. This is not a sponsored video, it's simply a really nice tool, and it's free, so I can't complain. As you can see, I created a very simple network with just four computers, and they are all connected to a single switch in the middle. Currently they are all in the same LAN; I didn't do anything special, the only thing I configured here is the IP addresses and MAC addresses of all those devices. We will split that network later on. You can think of the top devices as, say, the computers, and the bottom devices as, say, those CCTV cameras that we want to isolate. At this stage it doesn't matter, because as I said, right now they can all talk to each other; they are in the same LAN. But you might say: Marek, I don't have a switch at home, this doesn't really look like my home network. Well, the fact is you already have a switch. That device you got from your Internet service provider, although we usually refer to it as a router, is really an all-in-one device that has a switch built in, and all the devices at home that connect wirelessly, or into those usually yellow ports, are connected to that built-in switch inside the all-in-one device. But back to our example. We have those four computers, and since they are all on the same network, they can reach each other and ping each other. What do I mean by ping: if I click on a device and go to its command prompt, I should be able to ping, let's say, PC1, which has the IP address 10.0.0.1. You can see it works fine; we get a response from PC1, from the IP address 10.0.0.1. As I said, each computer currently has an IP address assigned; I assigned them manually, and that IP address is known as a Layer 3 address. Each device also has a MAC address, which is also known as the physical address, or the Layer 2 address. If you want to learn more about those addresses and those layers, please watch one of the previous videos where we talked about the OSI model. For now, let me just tell you that the switch in the middle has no idea what an IP address is; the switch cannot interpret any Layer 3 information like this IP address. The switch is a very simple device, really just a connection box that holds very limited information, and in fact the only information it keeps is which MAC address is connected to each of its ports. The MAC address is something you can check on your own computer with ipconfig — no, sorry, ipconfig /all: you can see this is the physical address, which is that MAC address. This is something you usually don't configure; it's assigned by the manufacturer of the network interface card. You can also see the IP address, which you can either configure yourself, as I did here, or you get it automatically from that all-in-one device you got from your ISP. Each device has a different IP address, and it also has a different physical address, the Layer 2 MAC address. You can see I simplified things a little: I wrote DD:DD, but a real MAC address is a bit longer, I just wanted to keep it short and didn't want to type all those characters. Never mind.
What I'm really interested in, though — let me close that — is that this is a managed switch. If I click on it and go to the CLI, the command line interface, I can run the command show mac address-table. And what can we see here? Let me move it slightly. As I said, this is all the switch can actually see: MAC address AA is connected to port Fa0/1, and MAC address DD is connected to port Fa0/4. But why are only those two displayed? Well, the fact is the switch only learns about MAC addresses once it sees some traffic on the network. We did that ping from PC4 to PC1, so the switch learned those two MAC addresses, but it hasn't seen any traffic yet between PC2 and PC3. It learns MAC addresses, and the ports they sit behind, only after receiving and sending some traffic. Which means that if I go to PC2, for example, open its command prompt and ping 10.0.0.3, we now get a response from PC3. I close that, and if I open the switch again and run exactly the same show mac address-table command — interesting, it has already forgotten the previous MAC addresses. So let's run the pings again; sorry, I will explain in a second. We are on PC1, so I ping 10.0.0.4, that's fine, and if we go back to the switch and run that command once more, now it can see all four devices. Why did it only show those two a moment ago and not the earlier ones? Because it had already forgotten the previous MAC addresses. It keeps that information only for a while; you can even configure how long it keeps it in its cache, but if it doesn't see any traffic from an address for a while, it will simply drop that information.
How does it learn that, exactly? Let's step back and think about what happens when I send a ping from PC1 to PC4. PC1 first creates a so-called packet, and that packet has several fields. One of those fields is called the payload, and in our example it's just the simple ping; the payload is the data that the packet carries. PC1 then adds another piece of information around that payload: a so-called IP header, which includes the source and destination IP addresses, the source being PC1 at 10.0.0.1 and the destination being PC4, so it puts in the destination address 10.0.0.4. However, PC1 also knows that to reach PC4 the traffic has to pass through that switch in the middle. PC1 is connected to the switch with an Ethernet cable, so it knows it has to create one more header, this time called an Ethernet header — not Internet, but Ethernet. This header includes the source and destination MAC addresses, because, remember, the switch in the middle has no idea what an IP address is; everything PC1 has produced so far is useless to the switch, since the switch can only read MAC addresses. So PC1 adds the source and destination MAC addresses as well: in this case AA as the source and DD as the destination. Only then is the ping actually sent. And the first time the switch in the middle sees that incoming packet — well, together with the Ethernet information it's technically called a frame, a packet is just the part with the IP information, but it doesn't really matter here — it can see that the source MAC address is AA, so it saves that in its MAC address table. At this stage, however, it doesn't yet know where the destination MAC address DD is. So what it does is send that frame out of all of its interfaces, except the interface it received the frame on. This behaviour is known as unknown unicast: the switch simply forwards the frame to all devices, hoping that one of them has the MAC address DD and will respond to the message, and PC4 indeed responds to that ping. Once the switch in the middle sees the response from PC4, it knows from that moment on that PC4 is connected to interface Fa0/4, and it saves that MAC address in its MAC address table. So this switch operates purely on those Layer 2 addresses, the MAC addresses, and the only thing it maintains is that MAC address table, where it records which computer is connected to which port.
that and you're like, Mark, you are missing one
important information. What is that villain? We've got villin information. What's that villain one? Thing is, we haven't
configured any villains yet and this managed
switch by default, will have something
called default villain. If it's not configured
with anything, all those interfaces
and all those devices will land in the same villain
called default villain. Basically, if you don't
configure anything, you land in villin one
in default villain. But now I want to
start splitting, this one local Area network into multiple virtual
local area networks. How do I do that? We will see clearly why it's
called virtual. The thing is, we are not
changing physically anything. We don't pull any cables. We don't buy another switch. Physically, it stays
exactly the same. Our network, we will only
change the logical setup, and I will only have to
reconfigure the switch. So the devices like computers
or maybe CCTV cameras, they will not even be aware that they land in some villains. This information
is only configured on the switch in the
middle. How do I do that? On the Cisco switch, I run command enable
or simply EM. I know these letters
are very small. I don't know if I can even
make it large hope it's okay. So it's enable and it's
configured terminal, which can be shortened to CFT. And then let's say we
want these top devices, which are maybe our computers. We want them to
be in villin ten, and the bottom devices will be maybe in Villain
20, let's say. So what I can do,
I can simply run command interface FA 01. This is the interface
I want to configure. I have to make sure that interface is configured
as mode access. So I say switchboard
mode access, not teach part, but switchboard. Switch port access Vin ten. We can see it says
access villain does not exist,
creating Vin ten. The first port you want
to assign to VLN ten, if that villain is
not preconfigured, then it will be created
for you automatically, so you don't have
to worry about it. But we want to put port Fa two as well in the
same villains. So I simply say interface FA 02, and again, switchport mode
access, I can use up arrow. Again, V ten switchboard
access Vin ten. Let's continue and
we will configure PC three and PC four in villain 20. But remember, we
don't touch the PCs. All configuration is
done on switch only. I say interface fa03, switchboard mode
access, but this time, switchboard acess villain 20
Vilan 20 also didn't exist, so it will be created
for me automatically. The same for interface fa04. Using up arrow because I'm lazy, mode access, villain 20. That's all. You will
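To recap, here is roughly what was typed on the switch; the prompts may look slightly different on your model, and the FastEthernet ports show up as Fa0/1, Fa0/2 and so on:

Switch> enable
Switch# configure terminal
Switch(config)# interface fa0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# interface fa0/2
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 10
Switch(config-if)# interface fa0/3
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20
Switch(config-if)# interface fa0/4
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20
Switch(config-if)# end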
You will see those orange dots while the ports are being reconfigured on the switch, but you can see the PCs are not aware that anything changed at all. But let's now close this, go to PC1, and ping maybe PC4. We did exactly that just a minute or a few minutes ago, so let me just use the up arrow and rerun the same command. Press Enter. What happens now? Let's have a look. Something is different already: you can see it says request timed out. But wait, I was able to reach it just a minute ago. So what happened? What is different now? The fact is, if I go back to the switch and run show mac address-table now (I have to exit configuration mode first), we can see just one MAC address. Isn't that interesting? Well, the thing is, if you look carefully, if you look again, you will also see the VLAN number has changed. The top PCs are now in VLAN 10 and the bottom PCs are in VLAN 20, and we tried to ping PC4 from PC1. What happened this time is that the ping was again sent to the switch, but this time the switch will only forward it to the interfaces that are in the same VLAN, and PC4 is no longer in the same VLAN. The switch checks that frame, it can see it's supposed to go to MAC address DDD, but it doesn't have MAC address DDD in VLAN 10. The switch will only check the MAC address table for VLAN 10, and because this MAC address is not there, it will flood the frame out of all the ports, but only those ports that are in VLAN 10. And the fact is, none of those PCs have MAC address DDD, and none of those PCs have IP address 10.0.0.4. That's why we simply don't get any response: the PC says request timed out, and the MAC address table only keeps that one source MAC address. But what we can do is go to PC1 and PC3. Maybe PC1 first: we should be able to ping 10.0.0.2 because it's in the same VLAN, and it works, as you can see. I can now go to PC3, and I should be able to ping 10.0.0.4 because they are also in the same VLAN, VLAN 20. Now if I go to the switch again and run show mac address-table, we have the full picture. The ping worked from PC1 to PC2, and the ping from PC3 to PC4 also worked. The switch was able to learn all those MAC addresses, but they are in completely separate VLANs now. They are not able to talk between VLANs: VLAN 10 and VLAN 20 behave like completely separate switches; you can consider them as being completely separate devices. In fact, our switch is currently split into three little switches, because whatever you connect to ports Fa0/1 and Fa0/2 will land in VLAN 10, whatever you connect to Fa0/3 and Fa0/4 will land in VLAN 20, and all the other ports (it has 24 ports; if I hover over it, it should display them all, look at that), all the remaining ports, if I connect something there, they will land in the default VLAN 1.
What really happens in the background here is that the PC still sends out the same frame, with the ping data, the IP information and the Ethernet headers. However, once the switch receives it, it adds yet another piece of information. For any traffic that lands on port Fa0/1, it will add something called an 802.1Q tag, or VLAN tag, and this 802.1Q tag carries the information that this traffic belongs to VLAN 10. Within that switch, the frame will only be forwarded further based on that information. And then, when it's sent out towards PC2, this information is stripped off. The VLAN information only exists inside that switch. The same for PC3: when the switch receives its ping, it adds the information that this traffic belongs to VLAN 20, and the frame will only be forwarded to devices connected to interfaces that have the same VLAN tag. But once that ping is sent out towards PC4, the VLAN information is stripped off again. The PC is unaware that it belongs to any VLAN; it all happens within that switch. And if we had another computer connected to that switch on the default VLAN, VLAN 1, then no 802.1Q tag is added. But if we had only one computer connected to one port in VLAN 1, it wouldn't be able to talk to anybody, because devices connected to the default VLAN can only talk to other devices connected to the same default VLAN, which on a switch is VLAN 1 by default. VLAN 1, or the default VLAN, simply means no 802.1Q tag is added, and it works this way.
So you can also connect some old or cheaper devices, usually unmanaged switches, where you can't configure VLANs. Very often at home you will have an unmanaged switch: you can't even log on to it, you can't run any of those commands, and you have no view of what's going on inside. Those switches only have the default VLAN, VLAN 1, and you can't reconfigure them. Basically, this is how you can connect an unmanaged switch to this managed switch, but it will only be able to work on that default VLAN. I hope that makes sense. If you wonder whether there is a maximum number of those small virtual networks, those VLANs, that can be created on one switch, then the answer is that you can create over 4,000 VLANs (the usable range goes up to 4094), so many more than you will ever need. But yes, there is a maximum number of VLANs that you can create on a single switch. That's all I wanted to say today. I hope that makes sense, and thank you for watching.
17. (pre-req for Proxmox vlan-aware video) Access port vs Trunk port: In this video, I want to talk about the differences between trunk ports and access ports. A trunk port and an access port are something you can configure on a switch at home or in your company network, as long as it's a managed switch. A managed switch simply means a switch that lets you change its configuration. In the previous video we talked about VLANs and VLAN tags, also known as 802.1Q tags, and we know that we can configure VLANs on the switch to divide your local area network, your LAN, into smaller virtual local area networks, VLANs. We know our devices by default are in the same VLAN, VLAN 1, also called the default VLAN. If you have an unmanaged switch at home, that means you are also in that default VLAN. If all these hosts are in the same VLAN, then all those devices (in this case we've got some PCs, some laptops and some PCs again) should be able to talk to each other. You can see I configured them with IP addresses and also set MAC addresses for them. PC1 has the IP address 10.0.0.1, PC2 has 10.0.0.2, et cetera. We can send, for example, a ping from this device: I will go to Desktop, then Command Prompt, and then you should be able to ping maybe 10.0.0.6, which is PC6 here at the bottom. I press Enter and we can see it's up and running. The ping is working as expected.
If we go to the switch configuration and I run the show vlan command, for example, I can see that indeed all the ports on the switch (and this switch has 24 ports altogether) belong to VLAN 1, which is the default VLAN. By the way, on Cisco devices you will also see some legacy entries in that output; they haven't been used for the last 30 years or so, but they are kept just for backwards compatibility. Basically, all the ports always belong to VLAN 1 if you don't change anything. If we run, for example, show mac address-table, we can see the MAC addresses of those PCs that connect to this switch. But because the switch has only seen the traffic between PC2 and PC6 so far, it currently only knows about those two computers, and they have MAC addresses BBBB and FFFF respectively. Because, remember, the switch in the middle does not know what an IP address is; the switch in the middle uses the MAC addresses to forward the traffic. From the switch's perspective, the traffic goes from MAC address BBBB to FFFF, and that's exactly what we can see here. And then if we run a command like show interfaces fa0/2 switchport, which is the port that PC2 connects to, we can see Fa0/2 and MAC address BBBB, which means PC2 connects to port Fa0/2 on the switch. We can see that the port mode is called access. It's static access, exactly: the operational mode is static access.
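For reference, these are the verification commands being used here, all run from privileged EXEC mode (show vlan brief gives a shorter version of the same table):

Switch# show vlan
Switch# show mac address-table
Switch# show interfaces fa0/2 switchport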
But what does that access mode actually mean? The access mode on a switch port is designed for connecting devices, I mean hosts like PCs or servers or whatever, that have no idea what a VLAN tag is; devices that are simply not configured or ready to receive a frame with a VLAN tag. And currently, VLAN tags are not in use in our case: all devices connect to the default VLAN, and the default VLAN doesn't use any VLAN tags, any 802.1Q tags. But there is yet another mode for a port, and it's called a trunk port. Trunk mode is a mode where you connect a port on that switch to another device that can understand VLAN tags, another device that can receive VLAN tags and knows what to do with them. In the current setup, to be honest, if we changed the mode from access to trunk, it would not change much, because we currently simply don't use any VLAN tags. From the previous video we remember that to use VLAN tags, we first have to configure the VLANs. Let's put Laptop3 and Laptop4 in a separate VLAN, let's say VLAN 10. To do that, I first run the command enable, or en, but I'm already in that mode. I run configure terminal, conf t for short. And then I have to go to ports three and four, because Laptop3 is connected to port three and Laptop4 is connected to port four on the switch. So I type interface fa0/3 for Laptop3, and I type switchport mode access, just to make sure it's in access mode, and then switchport access vlan 10. I put this laptop in VLAN 10, and because that VLAN didn't exist, the switch will create it automatically for me. We can see the orange circle, which means the port restarts and reconfigures itself, and then we do the same for port fa0/4. I just use the up arrow because I'm lazy, so I type again switchport mode access, to make sure we are dealing with an access port (because only an access port can be put in one specific VLAN like this), and then switchport access vlan 10. And now let's configure PC5 and PC6; let's put them in VLAN 20, maybe. So I type interface fa0/5 for PC5, switchport mode access, and switchport access vlan 20 this time. Again, that VLAN didn't exist, so the switch created it for me. And then interface fa0/6, which is the port where PC6 connects to the switch, and I do the same again: mode access, access vlan 20. That's it. Now I can exit configuration mode.
And if I run show vlan now, I will see that VLAN 1 indeed still exists, and PC1 and PC2 still belong to it, because I didn't change anything on those first two ports. I only changed ports Fa0/3 and Fa0/4 and put them in VLAN 10, and then I configured ports Fa0/5 and Fa0/6 and put them in VLAN 20; we can see VLAN ID 20 there.
So from now on, remember, we don't change anything on the PCs; they are not even aware they have been put in any VLANs. Note that we only change the switch configuration. All of those ports are still access ports, they are not trunk ports, they are still access ports. However, we've got three different VLANs now: Fa0/1 and Fa0/2 are in the default VLAN, Fa0/3 and Fa0/4 are in VLAN 10, and Fa0/5 and Fa0/6 are in VLAN 20, but they are all still access ports. And what that means now: from, let's say, Laptop3, if I open the command prompt, from now on I will only be able to ping Laptop4. I can only reach the devices that are in the same VLAN on the switch. If I ping, let's say, 10.0.0.4, that will work, because Laptop4 is in the same VLAN, VLAN 10. But if I ping, let's say, 10.0.0.6, that will not work. Why? Because 10.0.0.6, PC6, is in a different VLAN. We can see the request timed out. And if we go to the switch and run show mac address-table, the only traffic the switch could see now is between Fa0/3 and Fa0/4: when we ran the ping from Laptop3 to Laptop4, that was traffic within VLAN 10. And if we quickly ping from PC6, let's say ping 10.0.0.5, we can see that works. And also from PC2, if we ping 10.0.0.1, if we ping PC1, that should work as well, because they're both in the default VLAN. What I mean is, if we go now to the switch and rerun that show mac address-table command, we can see all the devices, but we can also see how they are spread between those VLANs: VLAN 1, 10 and 20. That makes sense.
So what is that trunk port for, then? Where is it? How do we use it? Let's say maybe that one switch is not enough for me. Maybe I've got a big company, maybe I've got three floors, and I want to have multiple switches, maybe one switch on each of those floors in my company. So I have to create some kind of connection between those switches. Let me add one: I pick a switch, I will choose this one, I want to add another switch, and I will need some more devices, maybe some more PCs on that, let's say, second floor. And I want this particular PC, for example PC7, to be in VLAN 20; maybe I want it to be able to communicate with PC5 and PC6. So how do I do that? The answer is, I can simply connect those two switches. Let's say port Fa0/7: I will connect it to Fa0/7 on this switch as well, but I will configure that connection as switchport mode trunk. I go to the switch configuration for this one, I can see Fa0/7 is now up, so I can run conf t, interface fa0/7, and I type switchport mode trunk this time. That's it. Then I simply go to the other switch, Switch1 on the second floor or whatever it was, I go to the CLI and I type enable first, then conf t. We will waste some time here because the switch thinks what I typed is a domain name and tries to look it up; let me prevent that from happening. I will type no ip domain-lookup. Sorry, conf t first, of course, and then no ip domain-lookup. We can ignore that; never mind. What I need is interface fa0/7, and I want it to be switchport mode trunk. That's it.
currently looks like. When I send a pink
from PC one to PC two, that pink will have some data. In our case, it will
be simply a pink. It will have source IP
address of 10.0.0.1. It will have destination IP
of 100 dot zero dot two. That's what PC one is building. This is called packet when
you have that information, and then it will add MAC
addresses because it can see it's connected to sich and switch doesn't know what
are the IP addresses. PC one understands
that it will have to add source and
destination MAC address. It adds AIA as a source
MAC address and B B B B as destination MAC address and only then forwards that
frame to the switch, switch can see that and it
will forward this information further to all devices that
are in default villain. Default villain
means no villain tag is attached. This frame. And it forwards it as
it is to B B, B B. However, if we send the same
pink from PC six to PC five, this time, the data
is still our pink. The source IP is 100
dot zero dot six. The destination IP is 10.0.0.5. The source MAC is FFFF. Destination MAC is EEEE and such frame goes
to the switch. However, when switch receives
that frame on port FA 06, the switch can see that this port is configured
with villain Tug 20. That means the
switch will attach additional information
called eight oh 21q tag, and we put the villain
identifier there in that field. It will say VLAN 20. This traffic belongs
to villin 20, so it can only
forward that frame to any other device that
is within that villain, and only one other device
is in this villain. It's PC five. So the switch forwards it to port number five. But then on the way out, it will strip off that,
that villain information. When PC five
receives that frame, it doesn't even realize it's in a villain from PC perspective, it doesn't belong to
any villains because all of that happens
internally within the switch. However, what if
we want to place the PC seven also in villain 20? We can use that trunk port
because the trunk port is the port that does strip of
any V and tag information. So basically, the
default villain villain one can travel via
this trunk port. The villain ten and
villin 20 can also travel further using this trunk port because the behavior
changes now. Trunk port is simply a
port where the switch will forward the frame as it is
with the vilantag information. Let me show you what I mean. If I go to that PC and I
configure it with IP address, let's say 10.0.0.7,
and I can assign MAC. ABCD, maybe. Let's connect it. To port FA 01, maybe let's move
it a little bit. If I go to the switch
configuration and the PC is connected to Interface
Fa 01 on this switch, I can say switchport
mode access. Switchport access, VLM, and then whatever villain
I want, maybe 20. I will put it in villain 20. That means from now on, this PC seven is able to communicate with
PC five and six. Let's check. I say,
let me exit that. Let's go to the PC, command
prompt, IS pink 10.0.0.6. You can see it
works fine because PC seven sends the traffic from 10.0.0.7 to IP 10.0.0.6
with its own MAC address, and the destination MAC
address is this FFFF. But when switch one
receives that frame, it can see this switch port
is configured with VLNTug 20. It will add that
VLN tug and it will send it out every single port
that belongs to villin 20. But in this case,
there are no hosts in villin 20 but there
is a trunk port, and trunk port belongs
to all villains. So it forwards this traffic
out of this trunk port. And when this switch
receives that traffic, it will forward it further, but only to devices that are in villain 20, but
at the same time, because it knows
MAC address FFFF is connected to port FA 06 and
its switchboard mode access, it will strip off this
villain information again, and it will send
this frame to PC six as if it was in
no villain at all. But note that this PC seven cannot talk to,
let's say, PC one. If I pink, 10001, I am not able to reach it. To reach villain
one, I have to be also in default villain
in villain one. If I wanted to place this
PC in default villain, then I would do CT.
Interface FA zero, one, switchboard mode access, switchboard access, villain one. We can see the port
reconfigures on the switch, and if I go to the PC, when it turns green
as it is now, I should be able to
pink, let's say, 10.0.0.1, and I can. But if I try 06 that
worked just a minute ago, let me run up our room. Now I'm not able to reach villin 20 because
this switchboard, FI 01 on switch one does not
belong to villin 20 anymore. It's default villain
and default villain means there is no villin
tag added at any point of this path because default
villain is the one that does not add any
eight oh 21q tags. Basically the main
difference is that access port will keep stripping off that
villain information, that one Qtag while trunk will forward it as it is with
villain identifier. So all villains can travel via that one
single cable here. But bear in mind, there is one thing wrong
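As a quick side-by-side in configuration terms (the VLAN numbers are just the ones from this example):

! access port: frames leave untagged, the port belongs to exactly one VLAN
Switch(config)# interface fa0/1
Switch(config-if)# switchport mode access
Switch(config-if)# switchport access vlan 20

! trunk port: carries all VLANs, frames keep their 802.1Q tag
Switch(config)# interface fa0/7
Switch(config-if)# switchport mode trunk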
But bear in mind, there is one thing that isn't quite right here at the moment: we've got three VLANs, and a VLAN is a layer 2 concept, but they are all in the same subnet, and a subnet is a layer 3 concept. We talked about layer 2 and layer 3 in the video about the OSI model; you might want to revisit that one, because basically you want the layer 2 and layer 3 information to match. We shouldn't have them all in the same subnet. For example, if you have these devices in subnet 10.0.0.0/24, then these laptops should really be in a different subnet as well. If they are in VLAN 10, maybe we would create another subnet for them, 10.0.10.0/24, let's say, and because the PCs at the bottom are in VLAN 20 at layer 2, maybe we want to create a matching layer 3 subnet for them, let's say 10.0.20.0/24. Layer 2 and layer 3 are two different concepts, two different layers, and that's something to consider, but it's also somewhat unrelated to access ports and trunk ports. So yes, I hope that all makes sense, thank you for watching, and I will see you in the next episode. Thank you.
18. Proxmox vlan configuration (vlan aware Proxmox): In this video, I wanted to discuss Proxmox networking, and specifically that VLAN aware setting that we might have already seen and wondered what it is for. VLAN aware simply means being able to handle VLAN tags. In previous videos we discussed the networking topics behind this, like what a VLAN is, how it works, and what a VLAN tag is, also known as an 802.1Q tag, and we also discussed the difference between an access port and a trunk port that you can configure on your switch. These topics are prerequisites for this video: you need to understand those computer networking concepts to fully get what we are going to configure here with the VLAN aware Proxmox. But anyway, I hope you are all up to speed now, so let's see what it is all about. If we go to the node, to PVE in my case, and to the Network tab, we can see some entries already. And what are those entries? Well, the first four network devices are the physical interfaces on my mini PC. My mini PC has four Ethernet ports, and they are shown as four network devices here. You can see only the first one is active, because I only have one cable connected, so only that enp2s0 is shown as active. Then there is this vmbr0. What this is, it's called a Linux bridge. It's something like a virtual switch, let's call it, and it's the default bridge that was created by Proxmox during the installation. When I installed my Proxmox, I gave it the address 192.168.1.201, and that's what is shown here. We can also see that port enp2s0 belongs to that bridge, to that virtual switch, let's call it. If I double-click on it, we can see the same information: we can see the bridge port enp2s0, and we can see that VLAN aware checkbox, but it's unticked currently. And the other thing: if we go to the Shell and I run the command cat /etc/network/interfaces, we basically see exactly the same information (plus the loopback interface, which was not shown in the GUI): all those four physical interfaces, and here is our bridge. Currently it's configured statically; I gave it this IP address and this gateway, and we can see the only port that belongs to that bridge is enp2s0.
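For orientation, the default /etc/network/interfaces on my node looks roughly like this sketch; the interface name and the 192.168.1.201 address come from my installation, while the gateway shown is only an assumed example, and the other three physical ports have similar one-line manual entries:

auto lo
iface lo inet loopback

iface enp2s0 inet manual

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.201/24
        gateway 192.168.1.1        # assumed example value
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0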
That's cool. You can easily add more ports to this bridge; you can see it's a space-separated list of interfaces. If I wanted to add another interface, let's say enp3s0, I just type it here, click OK, and now I have another interface that is part of the same virtual switch. What you can also see is that pending changes note below: it says either reboot or apply configuration, because something needs to activate the change. What it means is this button. I will click Apply Configuration and say yes. Something runs in the background, but basically what it does is reconfigure this file. If we go to the Shell and rerun cat /etc/network/interfaces with the up arrow, we can see this new port enp3s0 was added to our config.
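In the file itself, that change is just the bridge-ports line of vmbr0 growing by one entry, roughly:

        bridge-ports enp2s0 enp3s0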
But that's not what we were going to talk about today. If I double-click on that, I will remove it again; let's go back to what it was at the beginning, and I will apply the configuration. We have the default config again. Now, what if this is not my only network? What if I have maybe a 10.20.20.0 network, and I put it in VLAN 20, and maybe I have another network, 10.30.30.0, and I put that one in VLAN 30? How do I configure my Proxmox to be able to reach all of those networks? One of the solutions would be to create more virtual switches and assign ports to them. I will show you quickly how it's done. I create a Linux bridge; it has automatically chosen the name for it, which is fine. That one would get an IP in the 10.20.20.0 network, with a /24, and I click Create. And I can add another bridge, maybe vmbr2. Oops, sorry, vmbr1 first needs a bridge port, enp3s0, so we have a physical interface attached as well. And now I create another bridge, and I give it 10.30.30.7, let's say, /24, so a different IP address, but in the 10.30.30.0 network, and I say the bridge port is enp4s0. And that's basically it. If somebody asks, Mark, but why didn't you tick that VLAN aware box? You said that VLAN 20 is for 10.20.20.0 and VLAN 30 is for 10.30.30.0. Well, in this case, since you created separate bridges for specific VLANs, you also have separate cables, which you can connect to access ports on the switch: let's say a VLAN 20 access port and a VLAN 30 access port on the switch on the other side. The VLAN tag will then be stripped off automatically, because those are access ports on the other side, and on an access port no VLAN tags are allowed; they are stripped off before the frames are forwarded to Proxmox. That's why it would work.
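As a rough sketch, the two extra bridges would end up in /etc/network/interfaces as something like this; the 10.20.20.1 address is only an example value in the VLAN 20 network, while 10.30.30.7 is the one used above:

auto vmbr1
iface vmbr1 inet static
        address 10.20.20.1/24      # example address in the VLAN 20 network
        bridge-ports enp3s0
        bridge-stp off
        bridge-fd 0

auto vmbr2
iface vmbr2 inet static
        address 10.30.30.7/24
        bridge-ports enp4s0
        bridge-stp off
        bridge-fd 0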
That's why it would work. But we can achieve the
same using just one cable. If you, for example,
have one port only, you can only use
one cable anyways. But then we can configure the
switch on the other side as a switchport mode trunk and trunk is a member of all Vilans. That means if the
traffic has no villain, it will land at
default interface. In our case is this one. This interface doesn't
expect any villin tags, but for traffic with
VilanTag 20 or Vilantag 30, we have to create interfaces that expect that
kind of traffic. So let me show you what I mean. Let me remove this first.
Let me remove that. And if I go to VM BR zero, I can make it Vilanaware now. I click Okay, I will
apply configuration. Yes. If we go back to Shell
and check the config, and for this portion,
nothing changed. But you can see at the
end bridge villain OR, yes, and bridge VIDs 2-4094. What it means, why one
isn't it included? Villain one is a
default villain, and it will be still
received by this interface, this VMBRzero static interface because it doesn't
expect any villin tags. But if I go back to network, I create Linux
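After ticking the box, the vmbr0 stanza simply gains those two lines, so it reads roughly like this (same caveat about the gateway being an assumed example):

auto vmbr0
iface vmbr0 inet static
        address 192.168.1.201/24
        gateway 192.168.1.1        # assumed example value
        bridge-ports enp2s0
        bridge-stp off
        bridge-fd 0
        bridge-vlan-aware yes
        bridge-vids 2-4094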
But if I go back to Network, this time I create a Linux VLAN. There are actually two ways I can create an interface that can receive traffic with a VLAN identifier. You can see the name field suggests something like vmbr0.100 as an example. Let's see: I will type vmbr0, but with .20 at the end. What happened here is that Proxmox automatically created a so-called sub-interface, and that sub-interface belongs to the main interface vmbr0, to that switch, to that bridge I mean, and it automatically expects a VLAN tag of 20. And now I can also assign an IP address, 10.20.20-something, whatever it was, I don't know, 55, it doesn't really matter, as long as it's in the same network as the rest of the network I have configured in VLAN 20 on my switch on the other side. So if I create that now, I have something that can receive traffic with VLAN tag identifier 20, and I have a layer 3 interface, which is 10.20.20.55.
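In the config file, that sub-interface ends up as a short stanza along these lines (the /24 mask is an assumption matching the example network):

auto vmbr0.20
iface vmbr0.20 inet static
        address 10.20.20.55/24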
If I create another VLAN interface, for VLAN 30 this time, I could again do vmbr0.30 if I wanted; that would automatically assign the VLAN tag, and the interface would belong to vmbr0. But the other way of doing it is that I can simply put whatever name I want here, maybe mark30, let's say, and manually assign the raw device, which is the bridge vmbr0 (that's the only one we have), and the VLAN tag. The tag doesn't have to match my name at all; the name could say 77 or whatever, but I need VLAN tag 30 here, because that's the tag we expect on this VLAN interface. And last but not least, I also want an IP address on this interface: 10.30.30-something, whatever IP is available, maybe 88, with a /24, and I create that interface. Now if I go to the Shell and check my config, you can't see anything new. Why? Because I forgot to apply the configuration. So let's apply it, go back to the Shell, run that command again, and now we can see the full config on the Proxmox. This main interface has its IP address; this is the main interface that will receive the traffic where no VLAN tag is added, which means it will process the traffic for the default VLAN. But we have two more interfaces now. This one is the sub-interface, and Proxmox, just by looking at the name, knows it belongs to vmbr0 and that it expects VLAN tag 20. And this other config is a little bit longer; that's kind of the equivalent of an interface vlan, if you come from the Cisco world, in networking terms, or at least that's how I see it. So while this one is a sub-interface, this one is more like an interface vlan: we can call this VLAN interface whatever we want, we can assign an IP address, but the VLAN identifier that we expect has to be specified separately, and we also have to specify which of those virtual switches will process this traffic, and we configure it with vmbr0, which includes this physical interface. So these are two ways of basically doing the same thing.
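For completeness, the second, manually named variant lands in the config as something like this sketch; the name mark30 and the .88 address are just the example values chosen above, and the vlan-id / vlan-raw-device lines are how the tag and the bridge get recorded:

auto mark30
iface mark30 inet static
        address 10.30.30.88/24
        vlan-id 30
        vlan-raw-device vmbr0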
And then, if you wonder how to add your virtual machines or containers, LXC containers for example, to a specific sub-interface, you simply create a container as usual: I go Next with the only template I have, then Next, Next, Next; the Network step is what I'm interested in. By default it wants to go to that default vmbr0 on the default VLAN, but nothing stops me from changing it to, let's say, VLAN tag 30, and then maybe I give it an address on that network: something in 10.30.30.0, maybe .88, with a /24. This way, I attach this LXC container to the VLAN interface I created on that virtual switch.
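The same network settings can also be applied from the command line with pct; a minimal sketch, assuming a container with VMID 101 and the example addressing above (the gateway value is hypothetical):

pct set 101 -net0 name=eth0,bridge=vmbr0,tag=30,ip=10.30.30.88/24,gw=10.30.30.1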
I know it might be a bit complicated, but watching the previous videos, where we discussed VLANs, VLAN tags, access ports and trunk ports, will help a lot; if you need to, you might have to revisit those. That's all I wanted to say about Proxmox VLANs, so I hope it was helpful, and thank you for watching.
19. Thank you!: I hope you had a great time and I hope you learned a lot. Please remember to visit the Automation Avenue platform if you want to learn even more IT-related stuff. Thank you for choosing this training, and thank you for watching.