Why OpenVZ and not XEN.

After one year of operation with Xen, I chose to move Fridu from Xen paravirtualization to the OpenVZ container model. Here follow some explanations of why I made this change and a description of my new architecture.


Disclaimer

Everything I wrote here was done outside of my professional work context, and none of my current or past employers or customers participated in, or were even consulted about, this work. Fridu is 100% part of my free time; everything, including hosting, is funded out of our own pocket money and used to support non-commercial friend organisations. While I think I have the technical background to design a smart architecture (cf: my profile), I nevertheless do not guarantee that it will work for you, or even that you will agree with me. I still hope it may help some of you, and I would be more than happy to incorporate improvements if ever you have some.

Demonstration/Video

This demonstration is a live screencast done with xvidcap on Linux. It shows how to create a new virtual machine through the Proxmox OpenVZ web graphical interface, and then how to expose the newly created zone to the outside world with three different mechanisms: VPN, port forwarding and reverse proxy. The demo runs for about 25 minutes; it uses Flash video and hopefully plays smoothly on any platform.

My new OpenVZ ISP architecture

The new Fridu architecture is very similar to the old Fridu-Xen one: it uses the same firewall, provides the same networking port forwarding facilities and leverages OpenVPN for direct access to the virtual machines. The only changes are the move from Xen to OpenVZ and the introduction of a reverse proxy at the hypervisor level.
 

Xen is rock solid, but ...

I had no issues with Xen functionality or stability. I never had to reboot any zone, and I shut my system down with 360 days of uptime, which means Xen never went down from the time I switched the system on to the time I moved to a new dedicated server with OpenVZ. My Xen config is described in a previous post (here).
I nevertheless have two main reproaches against Xen. The first one is that it locks physical RAM, and thus requires RAM that I do not have on my cheap dedicated server. The second one is that Xen-to-Xen networking is extremely slow, which is in fact the only MUST-solve problem I see with Xen.

OpenVZ: limited but lightweight and simple.

OpenVZ has one strong limitation compared to Xen: it is not full virtualization, and therefore you are limited to Linux-only containers. People working with Sun will recognize the Solaris zones concept that was introduced a few years ago. Like with Solaris, every OpenVZ zone shares the same kernel, which at OVH translates into a Linux 2.6.24.7 kernel. This being said, it is important to understand that Linux distributions are independent of the kernel; you can therefore run any Linux distribution you want under a single kernel. While OVH ships Debian Etch with the OpenVZ hypervisor, you can choose any other distribution for your zones; the new version of Fridu mostly runs Ubuntu, but nothing prevents you from running multiple distributions. OVH ships templates for Debian, CentOS, Gentoo and Ubuntu, but if this is not enough you can either create your own template or download one from the Internet (OpenVZ wiki).
OpenVZ includes a set of scripts to create and manage virtual machines, unlike Xen, which ships naked and for which I had to write more or less equivalent scripts myself (cf: Fridu Xen Quick Start). Furthermore, OVH ships OpenVZ with a web console from Proxmox. Not that I am a big fan of having a GUI, but as you can see in the video, it is great for making sexy demos. This console allows you to create a new virtual instance literally in a matter of seconds :) It lets you start/stop instances, change RAM size, IP addresses, etc. without forcing you to remember any special commands. While the Proxmox console misses a few features, like an SSH applet, a firewall config or a Java VPN, I must say that I got used to it and create every virtual machine through the web GUI.
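For readers who prefer the command line over the Proxmox GUI, here is a minimal sketch of the equivalent vzctl sequence; the CTID, template name, hostname and IP are hypothetical examples, not values from my setup.
# create and start a container from the command line (values are examples)
vzctl create 101 --ostemplate ubuntu-8.04-minimal        # CTID 101, template taken from /vz/template/cache
vzctl set 101 --hostname vz-test --ipadd 10.10.101.9 --save
vzctl set 101 --nameserver 91.121.173.80 --save          # point the zone to the hypervisor DNS
vzctl start 101
vzctl enter 101                                          # get a root shell inside the zone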
OpenVZ is very lightweight: not only does it share the same kernel, but also the same filesystem and networking stack. The direct result is that, on a given server, you can run more OpenVZ zones than you could run Xen virtual machines. From a user point of view, once a zone is up, whether you run OpenVZ or Xen is fairly transparent. This being said, there are nevertheless some fundamental differences:
  • OpenVZ RAM is taken from the hypervisor global pool (RAM + swap), whereas Xen takes RAM from the hypervisor physical RAM and handles swap privately per virtual machine. The direct result is that with OpenVZ you can allocate more RAM than you effectively have; on the other hand, if one zone lacks RAM, that zone will not expand onto swap. In fact OpenVZ refuses to allocate RAM to the requesting process, which for most of them translates into a core dump :( As swapping is never a good idea, both methods may finally look more or less equivalent. This being said, the small advantage of the OpenVZ strategy is that it is easier to share unused RAM between different zones, which may matter for cheap hosting configurations where RAM is limited (a quick way to check whether a zone hit its limits is shown right after this list).
  • OpenVZ zones and the hypervisor share the same kernel; as a result some functions fail (ex: changing the system time). While this should not be an issue, it breaks more scripts than it should :(
  • No boot console is a real bad point for OpenVZ. It makes template creation complex: in fact you have no idea of what happens until you can access your system, which obviously never happens when boot fails :( The fact that you can access any given zone's logs directly from the hypervisor through the /vz/private/zone-id directory makes debugging possible, but far less simple than with the full boot console provided by Xen. Note that some distributions like Debian will write the trace of your boot sequence into the VM's /var/log/init.log; unfortunately this does not work on every distribution, Ubuntu being one of them.
  • A snoopable network interface: this is a very good point for OpenVZ. While Xen also lets you snoop your network interfaces with tcpdump or Wireshark, Solaris zones do not support it. Snooping network interfaces is a key feature for debugging any network infrastructure, and almost mandatory to understand what happens when your hosting configuration is not working.
  • A shared filesystem: the main advantage is that you can directly access any zone's private disk from the hypervisor; the limit is that you cannot leverage a SAN interface directly from the zone, as you can under Xen with iSCSI. This being said, for cheap hosting the OpenVZ option is the better choice.
  • The Proxmox web GUI: while this is not OpenVZ as such, it ships as standard and allows you to create and manage your virtual machines in a very nice and intuitive way. Furthermore, Proxmox provides a list of appliances that you can download from ftp://pve.proxmox.com/appliances; this allows you to install Drupal, Joomla and many other nice tools in a matter of seconds, nice isn't it?
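To illustrate the RAM point above: when a zone hits its memory limits, OpenVZ records the refusals in /proc/user_beancounters. This is a standard OpenVZ check rather than something from my config; the CTID below is an example.
# the last column, failcnt, counts refused allocations; a non-zero value means the zone ran out of RAM
vzctl exec 101 cat /proc/user_beancounters     # 101 is an example CTID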

Fridu OpenVZ networking config

I reused the exact same firewall I had with Xen; I did not even change one line of the previous script. Nevertheless, this time I used a simpler configuration with only one security zone, and added a reverse proxy (Pound) to redirect HTTP traffic to the adequate virtual machine.
The OpenVZ hypervisor runs the firewall rules for port filtering and for forwarding to any service of a given zone, while the Pound reverse proxy handles HTTP, redirecting GET/POST requests to the adequate web server. Note that if I had more public IP addresses I could get rid of my reverse proxy, but I use a cheap OVH plan (Kimsufi) which only provides two public IPs.
You can find detailed options for the Fridu firewall (here); my current config is fairly simple and explained hereafter.
Security zones: the default OpenVZ config provides one virtual network interface (venet0) at the hypervisor level. While this does not enforce strong isolation between zones, you can nevertheless select the outgoing public IP address per internal subnet, which hopefully should be more than enough for most of us.
Hereafter is an extract from my firewall config with my two public IP addresses. Both zones share the same internal (venet0) and external (eth0) network interfaces; the only difference comes from the internal network subnets 10.10.101.0/10.10.102.0.
# Zones definition
# -----------------
IP_ONE=91.121.173.80
IP_TWO=87.98.139.141

CreateZone NAME=zOne   NIC=eth0 EXT=$IP_ONE BR=venet0 INT=10.10.101.0 MASK=255.255.255.0
CreateZone NAME=zTwo   NIC=eth0 EXT=$IP_TWO BR=venet0 INT=10.10.102.0 MASK=255.255.255.0
 
Right after the zone definitions comes the hypervisor network configuration. The hypervisor sits in a special zone named "none". As it has direct access to the external network interface (eth0), port forwarding is unnecessary and only port filtering needs to be defined. In my config, I need to open:
  • Port TCP/22 for SSH
  • Port TCP/80 and 443 for HTTP/HTTPS
  • Port TCP/5900 is used by vncterm to provide a console through the VNC applet.
  • Port UDP/44096 and TCP/563 for OpenVPN
# Application Port & Forwarding (default ACCEPT = none)
# ------------------------------------------------------
CreateApp  NAME=DOM0_SSH   ZONE=none  EXT=tcp:22     INT=eth0 ;# ssh Fridu.net goes to dom0
CreateApp  NAME=DOM0_WWW   ZONE=none  EXT=tcp:80     INT=eth0 ;# http needed for Proxmox console
CreateApp  NAME=DOM0_SSL   ZONE=none  EXT=tcp:443    INT=eth0 ;# https needed for Proxmox console
CreateApp  NAME=DOM0_VNCt  ZONE=none  EXT=tcp:5900   INT=eth0 ;# virtual machine console through VNC
CreateApp  NAME=DOM0_VPNt  ZONE=none  EXT=tcp:563    INT=eth0 ;# OpenVPN in TCP
CreateApp  NAME=DOM0_VPNu  ZONE=none  EXT=udp:44096  INT=eth0 ;# check User Custom Rules later
The last mandatory part of the firewall rules describes how to forward ports to a designated virtual machine, for example mapping an SSH port (tcp/22) onto an external port (ex: tcp/2215) so the administrator can access a given machine without going through the VPN. In the following configuration extract we see how I redirect SMTP/IMAP mail traffic, as well as HTTP (Apache/Tomcat) and SSH. With this configuration, "ssh -p 2215 root@my-public-ip" will connect to SSH port 22 on virtual machine 10.10.101.5, and http://my-public-ip:8115 will connect to my Tomcat instance on 10.10.101.5, etc.
# map external mail SMTP to the Mail zone on port 2525, to separate it from local traffic that arrives through the VPN on port 25
CreateApp  NAME=Mail_SMTP   ZONE=zOne    EXT=tcp:25      INT=10.10.101.1:2525
CreateApp  NAME=Mail_IMAP   ZONE=zOne    EXT=tcp:993     INT=10.10.101.1:993
 
# Any traffic arriving on IP-TWO/port 80 is redirected to 10.10.102.2
CreateApp  NAME=Domi_WEB    ZONE=zTwo    EXT=tcp:80      INT=10.10.102.2:80
 
# Allow SSH and HTTP direct mapping for 10.10.101.5
CreateApp  NAME=SSO_SSH    ZONE=zOne    EXT=tcp:2215     INT=10.10.101.5:22
CreateApp  NAME=SSO_WWW    ZONE=zOne    EXT=tcp:8015     INT=10.10.101.5:80
CreateApp  NAME=SSO_TOM    ZONE=zOne    EXT=tcp:8115     INT=10.10.101.5:8180
While the Fridu firewall will do the job with only the previous rules, you may want to add some extra rules to optimize your configuration. There are three sets of rules in my custom optimization section:
  • The FTP one is only required if you want FTP to work in passive mode, which is often the case.
  • The second one is almost mandatory: (tun+/venet+) allows the VPN to reach any zone directly, while (venet+/venet+) allows zone-to-zone connections. If you run a shared VPN you MUST have tun+/venet+, while venet+/venet+ is only needed if one zone runs a service shared by the others, e.g. LDAP, DNS, ...
  • The last one is an ugly hack that allows my TCP VPN on port 563 to be seen as sitting on port 443 when targeting IP-TWO. This lets me have HTTPS on IP-ONE and the VPN on IP-TWO while both share the same port and the same external interface (eth0).
# User Before/After Zone Custom Tables (before-input|output|forwarding, after-input|...)
# ----------------------------------------------------------------------------------
if test "$ACTION" = "start" ; then
  # DoIt modprobe  -s ip_conntrack_ftp                                  # load FTP session tracking

  # we're not a bank, let's make our life simple
  DoIt iptables  -A after-forwarding -i venet+ -o venet+  -j ACCEPT   # allow VMs to talk together
  DoIt iptables  -A after-input      -i tun+              -j ACCEPT   # allow VPN to talk to dom0
  DoIt iptables  -A after-forwarding -i tun+   -o venet+  -j ACCEPT   # allow VPN to talk to zones
  DoIt iptables  -A after-forwarding -i venet+ -o tun+    -j ACCEPT   # allow zones to talk to VPN

  # Redirect HTTPS (443) arriving on IP-TWO to the OpenVPN TCP port 563
  DoIt iptables  -A PREROUTING -t nat -i eth0 --destination 87.98.139.141 --proto tcp --dport 443 -j DNAT --to 91.121.173.80:563

fi

Download Fridu firewall

The Fridu firewall has been extended to support not only OpenVZ but also Xen and VirtualBox, and it now has its own dedicated page.

My Reverse proxy configuration.

One change from my previous architecture is the addition of a reverse proxy; while not mandatory, it adds a lot of flexibility. I found Pound to be exactly what I needed. While Pound is originally targeted more at load balancing, failover and SSL termination, it works very well as a generic proxy. Pound is part of the standard distribution, so a simple "apt-get install pound" is enough to get it running. While the documentation is very limited, the configuration file remains simple enough for this not to be an issue.
  • 1st, define a listen IP-ADDR/port for your reverse proxy.
  • 2nd, define your services, with as input the DNS name as known by external Internet users, and as output your OpenVZ internal zone + port.
In the following config sample, Pound listens on interface 91.121.173.80 (Fridu's public address), port 80. It then forwards any HTTP request with destination "www.fridu.net" to the virtual machine named "vz-opensso" in the hypervisor's /etc/hosts, on port 8180, and requests for zxid.fridu.net to the same virtual machine but on port 81.
## redirect all requests arriving on port 80 ("ListenHTTP") to the right backend (see "Service" below):
ListenHTTP
        Address 91.121.173.80
        Port    80
        # my services definition
        Service
                HeadRequire "Host:.*www.fridu.net.*"
                BackEnd
                        Address vz-opensso
                        Port    8180
                End
        End

        Service
                HeadRequire "Host:.*zxid.fridu.net.*"
                BackEnd
                        Address vz-opensso
                        Port    81
                End
        End
End
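To verify that Pound routes requests as expected, a quick sanity check I would suggest (not part of the original setup) is to send requests with the right Host header against the public IP:
curl -H "Host: www.fridu.net"  http://91.121.173.80/    # should be answered by vz-opensso:8180
curl -H "Host: zxid.fridu.net" http://91.121.173.80/    # should be answered by vz-opensso:81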

Dnsmasq, a small domain name server.

Running a local DNS server is not mandatory, but it is nevertheless very convenient to let your VMs use the local names of the other VMs. For such a small usage a full DNS server like "bind9" is overkill, and "dnsmasq" is all we need. When running "dnsmasq" on your hypervisor, it reads your "/etc/hosts" and serves it to every VM; any other request is forwarded to the DNS servers defined in "/etc/resolv.conf". As a result the "dnsmasq" config is minimal: you only have to define which interfaces it listens on and your default domain extension.
 root@ks362337:~# cat /etc/dnsmasq.conf
# Configuration file for dnsmasq.
# "/usr/sbin/dnsmasq --help" or "man 8 dnsmasq" for details.

# Domain: anything you want except "local", otherwise clients will use mDNS
domain=***
expand-hosts

# Never forward reverse lookups for private IP ranges
bogus-priv

# /etc/resolv.conf is static, no monitoring needed
no-poll

# Do not listen on eth0
except-interface=eth0
Last but not least, you need your VMs to point to your local DNS, which hopefully is already your template's default.
root@vz-fulup:~# cat /etc/resolv.conf
search ovh.net
nameserver YOUR_HYPERVISOR-IP
nameserver 127.0.0.1
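For completeness, here is the kind of hypervisor /etc/hosts that dnsmasq serves to the zones; only vz-opensso comes from my config above, the other names and IPs are made-up examples.
root@ks362337:~# cat /etc/hosts
127.0.0.1      localhost
10.10.101.5    vz-opensso
10.10.101.1    vz-mail        # hypothetical entry for the mail zone
10.10.102.2    vz-domi        # hypothetical entry for the zTwo web server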

OpenVPN

While optional, I highly recommend OpenVPN for such an architecture: it is an easy product to install and it significantly simplifies administrator and developer tasks. OpenVPN is available out of the box with any distribution. The Fridu configuration is available (here).

Proxmox web frontend console

Proxmox's web frontend is not a native component of OpenVZ; nevertheless it is a very useful companion, and I strongly recommend that whoever is willing to adopt OpenVZ consider Proxmox. If you are not convinced, check my small video demo.
For the most part, Proxmox usage is straightforward. The only issue I had was with the embedded VNC Java applet that provides a vt100 entry point to the virtual machines. While on the browser side Proxmox uses a traditional vncviewer, on the server side it uses vncterm. I use VNC frequently, but had never seen vncterm before. Vncterm is a contribution from Proxmox to VNC: it is a small vt100 emulation that can be rendered remotely through a vncviewer, and it does not require X11 to be installed on the machine.
When you click the "console" tab in the Proxmox GUI, it starts a vncterm process on your hypervisor. Because vncterm is started at the hypervisor level, you neither need vncterm installed in each virtual machine, nor need a working network on the targeted VM. If you check your hypervisor processes after clicking on the console you should see:
ps -ef | grep vncterm
/usr/bin/vncterm -rfbport 5900 -passwdfile rm:/tmp/.vncpwfile.7355 -timeout 1 -c /usr/sbin/vzctl enter 101
 
The only bad news is that vncterm does not leverage any well-known port such as tcp:22 or tcp:443; in fact it takes any available port on the hypervisor in the range 5900-6000. This means you need at least port 5900 open on the hypervisor; having only port 5900 open limits you to one active vncterm session at a time.
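If you want more than one simultaneous console, a hedged option is to open the whole 5900-6000 range in the custom rules section of the firewall script, reusing the after-input chain shown earlier; I only open 5900 myself, so treat this as an untested sketch.
  # allow the whole vncterm port range on the hypervisor (untested sketch)
  DoIt iptables -A after-input -i eth0 -p tcp --dport 5900:6000 -j ACCEPT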
A big thanks to Dietmar from Proxmox, who provided me with very quick and very efficient support.

Limitations, bugs and tips.

Limitations/ bugs.
I did not find any significant bugs in OpenVZ; nevertheless the Ubuntu template as shipped by OVH needs the following fix, or SSH from within the template will not work. My system has been up and running since summer and I never had any trouble. The only few things I do after each zone creation are to add two missing but quite useful pseudo devices, update my /etc/hosts and add the required packages with apt-get (see the sketch after the list below).
  • No /dev/tty           -> "vzctl exec MY-VIRTUAL-MACHINE-ID mknod /dev/tty c 5 0"
  • No /dev/random -> "vzctl exec MY-VIRTUAL-MACHINE-ID mknod /dev/random c 1 8"
  • No /dev/console
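Here is a minimal sketch of the post-creation fixup I describe above; the package list, IP and name passed as arguments are placeholders you will want to adapt.
#!/bin/sh
# fix-zone.sh CTID IP NAME -- apply the usual post-creation fixes (sketch)
CTID=$1 ; IP=$2 ; NAME=$3
vzctl exec $CTID mknod /dev/tty    c 5 0                # missing pseudo devices
vzctl exec $CTID mknod /dev/random c 1 8
echo "$IP $NAME" >> /etc/hosts                          # so dnsmasq serves the new name
vzctl exec $CTID "apt-get update && apt-get -y install openssh-server rsync"   # example packages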
Tips:
  • When checking the process list on the hypervisor with "ps -ef" you see every process, including the ones running in the VMs. As a result a "pkill -9 apache2" at the hypervisor level will kill every single Apache running in every VM. Good to know.
  • If a VM restart fails (ex: you messed up the network config, broke some init script, not enough RAM, ...), you obviously cannot log in to your broken VM, but you can still enter the VM from the hypervisor with the "vzctl enter VM-ID" command. Example: your "syslogd" did not start; as this is the first process started by init-rc, it blocks everything else, especially "sshd", which is usually the second one to be started. As you have no sshd you cannot connect with SSH, obvious isn't it? You can then use "vzctl enter VM-ID"; doing a "ps -ef" you should see a process like "/etc/init/rc 2" since init did not finish. You can then restart your syslogd with "/etc/init.d/sysklogd start" or whatever is necessary to fix your problem.
  • If you can reach a VM from the outside, but cannot reach the Internet from that VM, then you need to check your "CreateZone" rule. Double-check that your network mask is consistent with your VM's local IP address. Example: (CreateZone ... EXT=IP1 INT=10.10.1.0 MASK=255.255.255.0) will map any VM with a local IP 10.10.1.xxx onto IP1, whereas (CreateZone ... EXT=IP2 INT=10.10.1.2 MASK=255.255.255.255) will only map 10.10.1.2 onto IP2. Note the differences in MASK and INT, with or without a zero at the end. The zone network mask defines the outgoing policy: a wrong definition will not impact incoming requests, but will prevent outgoing requests from working (the tcpdump sketch after this list helps to track where packets stop).
  • When learning how to handle multiple public IP addresses, i.e. how to forward a dedicated Internet port from a specific public IP to an internal VM's IP/port, do not start with port "22" but with something less critical like 80 or 25. An error on port "22" may prevent SSH access to your hypervisor, forcing you to reboot your full server to regain SSH access. The other good practice is to launch a batch job that will stop your firewall automatically after 10 minutes, "(sleep 600; Fridu-firewall.script stop) & Fridu-firewall.script start"; that way, even if you kill your SSH access, it is only for 10 minutes.
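When a forwarding rule does not behave, here is the kind of tcpdump session I would suggest to follow a packet from the public NIC down to the zone; the IP and ports reuse the SSO_SSH example above.
# does the request reach the hypervisor's public interface?
tcpdump -n -i eth0 host 91.121.173.80 and tcp port 2215
# was it DNAT'ed and forwarded onto the OpenVZ virtual interface?
tcpdump -n -i venet0 host 10.10.101.5 and tcp port 22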

Contact me.

Please leave your comments, and if you find any bug or improvement please let me know.
 
 
 
Comments (18)
Vz and Kvm
18 Sunday, 28 February 2010 21:06
lazag
Hello,
I solved my problem. I do not know whether the configuration is the right one, but it seems to work.
To give some context: I have one PC behind a Freebox running Proxmox.
I created VMs with KVM, bridged on vmbr0.
On the VM-firewall side: my public IP and one zone with INT=eth0 and BR=vmbr0. These VMs get their IPs from the Freebox DHCP. Everybody sees everybody.
Then I created a VM with OpenVZ (venet0 network). Its IP has nothing to do with the one provided by the Freebox (10.10.101.x instead of 192.168.0.x).
On the VM-firewall side: a new IP, the Proxmox one, and a new zone with INT=vmbr0 and BR=venet0. These VMs see everybody directly through their IPs, while the others reach them through the forwarded ports.
Regards,
lazag
=====> Fulup's answer
This is not a classic config and VM-firewall was not designed for it, but in the end it only generates iptables rules, so there is no reason why it should not work.
Vz and Kvm
17 Wednesday, 24 February 2010 17:16
lazag
Hello,
First of all, thank you!
I used VM-firewall with bridged KVM VMs (I am behind a router/box). No problem. Perfect. Thanks again.
I would now like to use a VZ container to benefit from the advantages described here. The problem is that I cannot get access either to the Internet or to the other KVM VMs.
I created a new zone with the same NIC and EXT, but a dedicated INT and with BR set to venet0.
I also tried giving this new zone a new IP from the local network behind the router, but it does not work any better.
What should I do?
Thanks in advance for your answer.
Regards,
lazag
===> Fulup's answer ===
It is not easy to debug a network config without seeing the exact diagram. That said, just follow the guide at http://www.fridu.org/fulup-samples-a-debug/86-vm-firewall-sample-debug; a network problem is never very complicated, you just have to follow the packets.
Question about the reverse proxy
16 Wednesday, 24 February 2010 16:38
Didier
Hello,
Congratulations again for sharing this nice config.
According to the diagram, if I understand correctly, the Pound reverse proxy is installed on the hypervisor. I would tend to secure the hypervisor as much as possible by giving it as few tasks as possible, and therefore put the reverse proxy in a VM and route port 80 to that VM.
So is this another valid solution, or am I missing something?
Thanks
===> Fulup's answer ===
You can indeed run the proxy in a VM dedicated to shared services (proxy, DNS, MTA, ...) by redirecting port 80 to that VM; that is actually what I do with port 25 for all the mail services. I did not do it that way, but it is indeed a good idea, as it avoids touching the hypervisor to change the reverse proxy config.
Thank you, what a nice presentation
15 Tuesday, 23 February 2010 23:46
Gaël
Hello and congratulations, 28 minutes that held all my attention.
I wanted to do exactly the same thing for the same reasons, namely the lack of public IPs and avoiding the extra cost of failover IPs. The last part, which shows how to run several services on port 80, solves my problem for businesses, and incidentally some personal problems as well. Thanks again.
I have a question if you have time: if I want to run several VPN servers on Proxmox, is it wise to do it as in your presentation, or is virtualization unnecessary?
Thanks in advance
====
You can perfectly well run several VPN servers without virtualization; if that is the only thing the VMs would run, it seems a bit oversized to me.
Internet access from the VM
14 Sunday, 10 January 2010 11:06
Valoo
Hello,
I really appreciated this video, both in form and in content.
One simple question though: how can a VM get out to the Internet once it has been created, simply to do an update (apt-get update)? Is it on the hypervisor that the routing has to be set up?
Thanks in advance.
Regards,
Valoo
====> Fulup's answer =====
Yes, the VMs access the Internet transparently through the hypervisor. The zone in which a VM is placed defines the public IP used for its outgoing traffic. By default the VMs have unrestricted Internet access; while completely forbidding them Internet access does not make much sense, one can understand the interest of blocking some outgoing ports (ex: port 25).
NTP Server
13 Tuesday, 08 December 2009 19:57
Matt
Hello,
I would like to set up an NTP server in a VE (I use a Debian minimal image).
How can I do that?
Thanks
Matt
===> Fulup's answer =======
* If you want an NTP server to serve all your VEs, just run it in one VE and point the others to the chosen VE.
* If you want an NTP server to serve the Internet, use a normal rule to forward the UDP packets to the chosen VE.
CreateApp NAME=NTP ZONE=???? EXT=udp:123 INT=VE-IP:123
Thank you, it was a good 20 minutes
12 Wednesday, 02 September 2009 18:28
Ivan
Thank you! It was a very interesting presentation, and I hope to give it a try.
Waiting for new interesting materials.
How to check how much traffic OpenVZ VMs are using?
11 Saturday, 13 June 2009 20:42
Tom
1)How can we see how much traffic OpenVZ VMs are using?
2)How can we see how much traffic Xen VMs are using?
===> Fulup's answer
I never really searched for this information, but I do not see why ntop (http://www.ntop.org) should not work.
Great Proxmox demo
10 Tuesday, 09 June 2009 17:34
Jean
I just want to thank you for sharing your Linux knowledge, and in this case Proxmox. This demo was really interesting for me to understand the basics of virtualization.

Thank you very much.
your firewall video
9 Tuesday, 19 May 2009 23:30
j jammer
Very insightful, thank you. Although I may watch it one or two more times, it is extremely helpful in understanding the methodology of the setup.
What about OpenVPN?
8 Wednesday, 06 May 2009 13:58
nicolas
Hi Fulup,
Many thanks for your fast reply to my previous post. I was wondering about your OpenVPN usage in such an architecture.
Could you tell me if I'm wrong? Basically, your OpenVPN usage in such an architecture saves you one SSH hop to your hypervisor before being able to SSH into your zones?
I mean, thanks to OpenVPN, you connect to your zones directly from your remote workstation. If I do not install OpenVPN, all I need to get the same result is to first connect via SSH to the hypervisor using the public IP address (SSH port), and from there I can SSH into any zone running on the hypervisor.
Did I miss something, or are there other benefits of using a VPN connection in such an architecture?
PS: Sorry if it's a stupid question. I'm not an expert and I try to simplify the architecture as much as possible without losing the interest of the tools you introduced on this page. I'm BTW going to set up a similar architecture in the coming weeks.
Thanks in advance and congratulations for your very interesting topics.
Nicolas
=========== Fulup's answer ========
OpenVPN saves you more than one SSH hop. If SSH were the only issue, you could use port forwarding to a given VZ, bypassing the hypervisor. With OpenVPN you get access to any port of your VZs, which is equivalent to having a new machine attached to your local network.
Nice tutorial / Little question
7 Tuesday, 05 May 2009 15:10
Nicolas
Hi Fulup,
Thanks very much for this nice tutorial. It really helps in understanding your architecture.
I have a question regarding the usage of the reverse proxy. Technically speaking, why not use the failover IPs supplied by OVH instead? Why not simply assign a failover IP to each virtual machine for delivering the requested services to the Internet?
This question comes up because I'll probably set up such an environment on a dedicated EG or MG model from OVH. Those servers come with some free extra failover IPs. I was just wondering what the difference between the two solutions is in terms of security, performance or whatever.
Best Regards, Nicolas
=========== Fulup's answer =========
You make a perfectly valid comment. At Fridu we only have one failover IP, but I use it for that exact purpose: it points to a special VZ. If you point one full zone toward a given virtual machine, it achieves exactly what you describe. From a performance and security point of view it is equivalent.
Changing IP address of Virtual Machine
6 Thursday, 26 February 2009 16:52
Ap.Muthu
The Proxmox v1.1 GUI allows changing the IP of the host machine. How do we change the IP address of a VM created using templates like Joomla!, etc.? Also, how is it done for standard CentOS 4/5 installs?
While creating the VM, say Joomla, we assign 192.168.10.1 instead of the default 127.0.0.1, and we see it assigned to venet0:0 in the output of the ifconfig command. Even setting another IP in a new file called /etc/network/interfaces.new and restarting the VM does not set it, and it still shows the old IP only.
===== Fulup's answer =====
This is unfortunately out of scope here: the Fridu firewall is independent of how Joomla/OpenVZ handles IP addresses. This being said, if you change any IP within your internal virtual network, you MUST update your firewall or nothing will work anymore. Nevertheless, and despite this, you might be interested to know that fridu.org runs Joomla within a Proxmox/OpenVZ container.
Informative and insightful.
5 Monday, 23 February 2009 11:24
wese
I have been using OpenVZ for 2 years now and was using KVM to run Windows when I was testing some stuff.
All in all, a great writeup with lots of information for virtualization beginners.
One thing I found out over time: when you have a high-traffic server it is better to put it directly on the Internet, if you have more IP addresses available, because the DNAT seems to slow things down a little, and to set up iptables inside the VE.
- wese
========== Fulup's answer =============
I have trouble understanding how iptables could slow down your connectivity. At least with 100 Mbit/s connectivity, I do not see any bottleneck created by routing + virtualization. This being said, with a gigabit interface it might be different. Have you done some benchmarks to highlight where the bottlenecks are?
Thanks for the great tutorial
4 Sunday, 21 December 2008 21:13
Fedir
Hello, Fulup,
I would like to thank you for the great tutorial. I honestly tried to follow each recommendation, reading the articles on fridu.org and watching the videos (it was a very nice way to spend the weekend, by the way). At this point I can create machines, and I have successfully installed the Fridu firewall and the Pound reverse proxy; I just have some questions about the network configuration. My virtual machines cannot reach any address on the Internet, only internal addresses or the server's main IP.
In your videos, when you create machines, we can see that your Proxmox "Cluster Node" has an internal 10.10.100.x address. In my Proxmox installation (the same model, a Kimsufi from OVH), I have the primary 91.x.x.x address in the "Cluster Node" field. Is that normal? Should I make some special cluster modifications? What modifications are necessary to configure 2 IP addresses with Proxmox (1 primary and 1 failover)?
Thank You very much,
Fedir.
===== Fulup's answer =======
The only things that could prevent your guests from initiating requests to the Internet are an error in your zone configuration or a DNS issue.
-> Internet-to-guest uses: NIC=eth0 + EXT=xxxx, where xxxx should be one of your public IPs and eth0 your real Internet NIC.
-> Guest-to-Internet uses: BR=venet0 [the OpenVZ virtual interface] INT=xx.xx.xx.0 MASK=255.255.255.0 [the network mask for the zone's guest(s), not an IP address!].
-> DNS is handled by dnsmasq, and my sample should work for you. WARNING: you must nevertheless make sure that the guests' /etc/resolv.conf points to the right name server. If the DNS is running on your hypervisor, as in my config, the nameserver in your guests' /etc/resolv.conf should point to your main PUBLIC-IP; if your dnsmasq runs in a guest, then you should point to that given guest.
** If NIC|EXT is wrong, port forwarding won't work, and you will not access your guests from the Internet as defined in CreateApp.
** If BR|INT|MASK is wrong, your guests will not access the Internet (or will do so with the wrong public IP).
Note: 91.x.x.x is not a private address and should not be used for guest IPs; this being said, it will not prevent your config from working :)
Master tutorial
3 Saturday, 22 November 2008 03:31
Trevor-Williams
I have been waiting for this type of setup for a very long time and finally it's here. Thanks, the power of open source. I tried the firewall setup stock; however, I have no access to the VMs nor to the hypervisor coming in from the outside. Which section of the firewall allows such access to the VMs from the Internet?
Fulup's answer: the firewall now has a fully dedicated page [here], which hopefully should help. This being said, for incoming data streams the typical error is either a wrong Zone or a wrong CreateApp [often a simple upper/lowercase syntax error in a label]. The best way is to tcpdump, moving from your external NIC interface to the hypervisor venet interface and then closer to the VM's venet0. If it still does not work, please send me your config.
Whoa!
2 Friday, 10 October 2008 10:46
Felix
Hey Fulup!

Sounds like you have too much time on your hands :-) :-)
Seriously, this is really exciting stuff. I've just skimmed over your article, will probably look at your action film one of these evenings.
The "no console" limitation sounds a bit scary. I guess that will probably be added pretty quickly.
Whilst there are many advantages/disadvantages of using a full VM/zones approach, I am wondering whether there will be a future hybrid solution where we may have very tiny VMs that implement a completely isolated network stack, and then attach those to zones. Or whether there is a hybrid approach that will be integrated in zone technology - this would allow you the best of both worlds.

------------------------- Fulup's answer to Felix ------------------------------
*** Bad *** I do not see the "no console" issue being addressed in the near future :( This being said, the fact that you can enter a VM even if boot failed (cf: my bug/tip section) provides a working hotfix. After entering your un-booted VM with "vzctl" you can launch "/etc/init/rc runlevel" manually to check what is breaking your automatic boot process.
*** Good *** For hybrid light/heavy containers, this is already working :) For my personal hosting usage Linux is more than enough, which explains why I do not leverage hybrid containers. OpenVZ zones work smoothly alongside KVM full virtualization; this allows you to mix Linux zones sharing a unique kernel with Solaris or Windows instances that run a private kernel. In fact this cohabitation is fully integrated into the Proxmox web GUI.
------
Very nice demo/tutorial on OpenVZ
1 Wednesday, 08 October 2008 09:08
Victor

Hi Fulup,


This is a great demo/tutorial on OpenVZ, very useful.


I will definitely give it a try, but it looks like it is just the thing that I need as well.


I did not know about pound either, I have used Apache for Reverse Proxying, but sometimes not all the Apache features are required. So pound is a good option as well.