Quick Start Guide
Key Concepts
- vCluster Control Plane: Contains a Kubernetes API server, a controller manager, a data store mount and the Syncer.
- Syncing resources: vCluster runs your workloads by syncing pods from the virtual cluster to the host cluster.
- Pod scheduling: By default, vCluster reuses the host cluster scheduler to schedule workloads.
- Storage: You can use the host's storage classes without the need to create them in the virtual cluster.
- Networking: vCluster syncs resources such as Service and Ingress resources from the virtual to the host cluster.
- Nodes: By default, vCluster creates pseudo nodes for every pod spec.nodeName in the virtual cluster.
Before you begin
You need the following:
- Access to a Kubernetes v1.26+ cluster with permissions to deploy applications into a namespace. This is the host cluster for vCluster deployment.
- kubectl CLI.
Deploy vCluster
Deploy a vCluster instance called my-vcluster to namespace team-x. The installation instructions use the vCluster CLI, but there are other installation options as well.
- Install the vCluster CLI.
- Homebrew
- Mac (Intel/AMD)
- Mac (Silicon/ARM)
- Linux (AMD)
- Linux (ARM)
- Windows Powershell
brew install loft-sh/tap/vcluster-experimental
If you installed the CLI using brew install vcluster, you should run brew uninstall vcluster and then install the experimental version. The binaries in the tap are signed using the Sigstore framework for enhanced security.
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-darwin-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-darwin-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-linux-amd64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
curl -L -o vcluster "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-linux-arm64" && sudo install -c -m 0755 vcluster /usr/local/bin && rm -f vcluster
md -Force "$Env:APPDATA\vcluster"; [System.Net.ServicePointManager]::SecurityProtocol = [System.Net.SecurityProtocolType]'Tls,Tls11,Tls12';
Invoke-WebRequest -URI "https://github.com/loft-sh/vcluster/releases/download/v0.20.0-beta.1/vcluster-windows-amd64.exe" -o $Env:APPDATA\vcluster\vcluster.exe;
$env:Path += ";" + $Env:APPDATA + "\vcluster";
[Environment]::SetEnvironmentVariable("Path", $env:Path, [System.EnvironmentVariableTarget]::User);
Reboot Required: You may need to reboot your computer to use the CLI due to changes to the PATH variable (see below).
Check Environment Variable $PATH: Line 4 of this install script adds the install directory %APPDATA%\vcluster to the $PATH environment variable. This is only effective for the current PowerShell session, so vcluster may not be found when you open a new terminal window. Make sure to add the folder %APPDATA%\vcluster to the PATH environment variable after installing the vCluster CLI via PowerShell. Afterward, a reboot might be necessary.
Alternatively, you can download the binary for your platform from the GitHub Releases page and add it to your PATH.
To confirm that the vCluster CLI is successfully installed, run:
vcluster --version
- (Optional) Configure vCluster with vcluster.yaml.
Create a file called vcluster.yaml with extra configuration for vCluster. Refer to the vcluster.yaml reference docs to explore all configuration options.
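For example, a minimal vcluster.yaml that enables syncing Ingress resources from the virtual cluster to the host (one of the features shown later in this guide):
sync:
  toHost:
    ingresses:
      enabled: true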
Deploy vCluster
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
- Deploy vCluster.
- Default config
- Custom config
vcluster create my-vcluster --namespace team-x
Include your vcluster.yaml config file:
vcluster create my-vcluster --namespace team-x --values vcluster.yaml
When the installation finishes, you are automatically connected to the virtual cluster and can run kubectl commands within the virtual cluster.
- Create the "team-x" namespace.
kubectl create namespace team-x
- Deploy vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
- Default config
- Custom config
helm upgrade --install my-vcluster vcluster \
--repo https://charts.loft.sh \
--namespace team-x \
--repository-config='' \
--version 0.20.0-beta.6
Include your vcluster.yaml config file:
helm upgrade --install my-vcluster vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace team-x \
--repository-config='' \
--version 0.20.0-beta.6
This uses Helm to generate the Kubernetes deployment files before applying with kubectl. Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
- Create the "team-x" namespace.
kubectl create namespace team-x
- Deploy vCluster.
- Default config
- Custom config
helm template my-vcluster vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n team-x | kubectl apply -f -
Include your vcluster.yaml config file:
helm template my-vcluster vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n team-x -f vcluster.yaml | kubectl apply -f -
- Create a main.tf file.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
- Default config
- Custom config
provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "my_vcluster" {
name = "my-vcluster"
namespace = "team-x"
create_namespace = true
repository = "https://charts.loft.sh"
chart = "vcluster"
version = "0.20.0-beta.6"
}Include your
vcluster-yaml
config file.provider "helm" {
kubernetes {
config_path = "~/.kube/config"
}
}
resource "helm_release" "my_vcluster" {
name = "my-vcluster"
namespace = "team-x"
create_namespace = true
repository = "https://charts.loft.sh"
chart = "vcluster"
version = "0.20.0-beta.6"
values = [
file("${path.module}/vcluster.yaml")
]
} -
Install the required Helm provider.
terraform init
- Generate a plan.
terraform plan
Verify that the provider can access your cluster and that the proposed changes are correct.
- Deploy vCluster.
terraform apply
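To confirm that the control plane came up on the host cluster, you can list the pods in the team-x namespace; expect a pod similar to the my-vcluster-0 StatefulSet pod shown later in this guide to be in Running state:
kubectl get pods -n team-x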
To deploy vCluster using Argo CD, you need the following files:
- vcluster.yaml for your vCluster configuration. This is optional if you are deploying with the default configuration.
- my-vcluster.yaml for your Argo CD Application definition.
- Create the Argo CD Application file my-vcluster.yaml.
Deployment uses the vCluster Helm chart. Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
- Default config
- Custom config
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-vcluster
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: my-vcluster
  destination:
    server: https://kubernetes.default.svc
    namespace: team-x
Use your vcluster.yaml file to pass the chart values:
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: my-vcluster
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: my-vcluster
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: team-x
- Commit and push these files to your configured Argo CD repository and synchronize it with your configured cluster.
- Install the clusterctl CLI.
- Install the vCluster provider.
clusterctl init --infrastructure vcluster
- Generate the required manifests and apply using clusterctl generate cluster and kubectl.
Deployment uses the vCluster Helm chart. Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
- Default config
- Custom config
export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
export KUBERNETES_VERSION=1.29.3
export CHART_VERSION=0.20.0-beta.6
kubectl create namespace ${CLUSTER_NAMESPACE}
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Include your vcluster.yaml config file:
export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
kubectl create namespace ${CLUSTER_NAMESPACE}
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
- Execute the following command to wait for the vCluster custom resource to report a ready status.
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
Learn more about the Cluster API Provider for vCluster.
Use your virtual cluster
Interacting with a virtual cluster is very similar to using a standard Kubernetes cluster.
Connect
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
Connect to your virtual cluster:
vcluster connect my-vcluster --namespace team-x
Disconnect from your virtual cluster and switch back to the original kube context:
vcluster disconnect
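To verify which cluster you are currently pointed at, you can print the active context:
kubectl config current-context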
If you can't use the vCluster CLI, retrieve the vCluster kubeconfig from a secret that is created automatically in the vCluster namespace.
The secret is prefixed with vc- and ends with the vCluster name, so a vCluster called my-vcluster in namespace team-x would create a secret called vc-my-vcluster in the namespace team-x.
Switch to your host cluster's context before running this command:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d
The secret holds a kubeconfig in this format:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:8443
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
users:
- name: user
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
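Because this kubeconfig points at https://localhost:8443, you need to forward the vCluster service port before using it. A minimal sketch:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d > kubeconfig.yaml
kubectl port-forward -n team-x service/my-vcluster 8443:443 &
kubectl --kubeconfig ./kubeconfig.yaml get namespaces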
If you can't use the vCluster CLI, retrieve the vCluster kubeconfig from a secret that is created automatically in the vCluster namespace.
The secret is prefixed with vc- and ends with the vCluster name, so a vCluster called my-vcluster in namespace team-x would create a secret called vc-my-vcluster in the namespace team-x.
Switch to your host cluster's context before running this command:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d
The secret holds a kubeconfig in this format:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:8443
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
users:
- name: user
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
If you can't use the vCluster CLI, retrieve the vCluster kubeconfig from a secret that is created automatically in the vCluster namespace.
The secret is prefixed with vc- and ends with the vCluster name, so a vCluster called my-vcluster in namespace team-x would create a secret called vc-my-vcluster in the namespace team-x.
Switch to your host cluster's context before running this command:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d
The secret holds a kubeconfig in this format:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:8443
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
users:
- name: user
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
After you retrieve the secret, connect to and manage your virtual cluster using Terraform. Use this secret to configure the Kubernetes Terraform provider.
- Create a new Terraform file in a separate folder from the Terraform file you used to deploy vCluster:
provider "kubernetes" {
config_path = "~/.kube/config"
}
data "kubernetes_secret" "vc_my_vcluster" {
metadata {
name = "vc-my-vcluster"
namespace = "team-x"
}
}
provider "kubernetes" {
host = "https://localhost:8443"
client_certificate = data.kubernetes_secret.vc_my_vcluster.data["client-certificate"]
client_key = data.kubernetes_secret.vc_my_vcluster.data["client-key"]
cluster_ca_certificate = data.kubernetes_secret.vc_my_vcluster.data["certificate-authority"]
alias = "my-vcluster"
}
data "kubernetes_all_namespaces" "host_namespaces" {}
output "host_namespaces" {
value = data.kubernetes_all_namespaces.host_namespaces.namespaces
}
data "kubernetes_all_namespaces" "my_vcluster_namespaces" {
provider = kubernetes.my-vcluster
}
output "my_vcluster_namespaces" {
value = data.kubernetes_all_namespaces.my_vcluster_namespaces.namespaces
}- The datasource
data.kubernetes_secret.vc_my_vcluster
readsmy-vcluster
's kubeconfig secret from the host cluster. - The second Kubernetes provider declaration uses the secret data to configure a second Kubernetes provider with the alias
my-vcluster
- The datasource
- Since you use host https://localhost:8443 in this example, you need to port-forward to access the virtual cluster:
kubectl port-forward -n team-x service/my-vcluster 8443:443
- Install the Kubernetes Terraform provider:
terraform init
- Change directory to the folder where you created this new Terraform file and run:
terraform plan
The resulting output shows that there are changes to the outputs for host_namespaces and my_vcluster_namespaces.
If you can't use the vCluster CLI, retrieve the vCluster kubeconfig from a secret that is created automatically in the vCluster namespace.
The secret is prefixed with vc- and ends with the vCluster name, so a vCluster called my-vcluster in namespace team-x would create a secret called vc-my-vcluster in the namespace team-x.
Switch to your host cluster's context before running this command:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d
The secret holds a kubeconfig in this format:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:8443
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
users:
- name: user
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
After your my-vcluster application is synced, you may retrieve the kubeconfig using kubectl. Remember to use the namespace that corresponds to your Argo CD Application's destination namespace.
Use the clusterctl get kubeconfig command to retrieve the kubeconfig. In this example, you get the cluster's kubeconfig, write it to a file, and then use it with kubectl to list the virtual cluster's namespaces:
export CLUSTER_NAME=my-vcluster
export CLUSTER_NAMESPACE=team-x
clusterctl get kubeconfig ${CLUSTER_NAME} --namespace ${CLUSTER_NAMESPACE} > ./kubeconfig.yaml
kubectl --kubeconfig ./kubeconfig.yaml get namespaces
Alternatively, retrieve the kubeconfig from the secret that is created automatically in the vCluster namespace. The secret is prefixed with vc- and ends with the vCluster name, so a vCluster called my-vcluster in namespace team-x would create a secret called vc-my-vcluster in the namespace team-x.
Switch to your host cluster's context before running this command:
kubectl get secret vc-my-vcluster -n team-x --template={{.data.config}} | base64 -d
The secret holds a kubeconfig in this format:
apiVersion: v1
clusters:
- cluster:
    certificate-authority-data: LS0t...
    server: https://localhost:8443
  name: local
contexts:
- context:
    cluster: local
    namespace: default
    user: user
  name: Default
current-context: Default
kind: Config
users:
- name: user
  user:
    client-certificate-data: LS0tLS...
    client-key-data: LS0tLS...
Deploy resources inside the virtual cluster
To illustrate what happens in the host cluster, create a namespace and deploy NGINX in the virtual cluster.
kubectl create namespace demo-nginx
kubectl create deployment nginx-deployment -n demo-nginx --image=nginx --replicas=2
Check that this deployment creates two pods inside the virtual cluster.
kubectl get pods -n demo-nginx
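Output is similar to the following; the pod hashes will differ:
NAME                                READY   STATUS    RESTARTS   AGE
nginx-deployment-6d6565499c-cbv4w   1/1     Running   0          20s
nginx-deployment-6d6565499c-s7g8z   1/1     Running   0          20s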
Note: Most resources inside your virtual cluster only exist in your virtual cluster and not in the host cluster.
To verify this, perform these steps:
- Disconnect from the current virtual cluster and switch back to the host context.
vcluster disconnect
- Check namespaces in the host cluster.
kubectl get namespaces
Output is similar to:
NAME          STATUS   AGE
default       Active   35m
kube-public   Active   35m
kube-system   Active   35m
team-x        Active   30m
- Look for the nginx-deployment deployment.
kubectl get deployments -n team-x
Notice that this resource does NOT exist in the host cluster. Deployments are typically not synced from the virtual to the host cluster because they are not required to run the workloads on the host cluster.
- Now, look for the NGINX pods.
kubectl get pods -n team-x
Output is similar to:
coredns-cb5ccc67f-kqwmx-x-kube-system-x-my-vcluster 1/1 Running 0 34m
my-vcluster-0 1/1 Running 0 34m
nginx-deployment-6d6565499c-cbv4w-x-demo-nginx-x-my-vcluster 1/1 Running 0 20m
nginx-deployment-6d6565499c-s7g8z-x-demo-nginx-x-my-vcluster 1/1 Running 0 20m
You can see from the output that the two NGINX pods exist in the host cluster. The my-vcluster-0 pod is the vCluster control plane.
K8s Resource Renaming: To prevent collisions, the pod names and their namespaces are rewritten by vCluster during the sync process from the virtual cluster to the host cluster.
Explore features
Configure features in a vcluster.yaml file. These examples show you how to configure some popular features. See the vcluster.yaml configuration reference for how to configure additional features.
Expose the vCluster control plane
There are multiple ways of granting access to the vCluster control plane for external applications like kubectl. The following approach uses an Ingress, but you can also use a ServiceAccount, LoadBalancer, or NodePort.
- Modify vcluster.yaml so that vCluster creates the required Ingress resource.
controlPlane:
  ingress:
    enabled: true
    host: VCLUSTER_HOSTNAME
  proxy:
    extraSANs:
      - VCLUSTER_HOSTNAME
Replace VCLUSTER_HOSTNAME with your vCluster instance's hostname.
- Apply your changes.
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
vcluster create --upgrade VCLUSTER_NAME -n VCLUSTER_NAMESPACE -f vcluster.yaml
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm upgrade --install VCLUSTER_NAME vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace VCLUSTER_NAMESPACE \
--repository-config='' \
--version 0.20.0-beta.6
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm template VCLUSTER_NAME vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n VCLUSTER_NAMESPACE -f vcluster.yaml | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply vCluster config changes by editing the vcluster.yaml file and running terraform plan:
terraform plan
Review the planned changes and apply them if they look appropriate:
terraform apply
Add your vcluster.yaml config file to the valueFiles array in your Argo CD Application file.
Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: VCLUSTER_NAME
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: VCLUSTER_NAME
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: VCLUSTER_NAMESPACE
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply Cluster API changes by regenerating the cluster custom resource using clusterctl.
Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
export CLUSTER_NAME=VCLUSTER_NAME
export CLUSTER_NAMESPACE=VCLUSTER_NAMESPACE
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
After the changes have been applied, wait for the vCluster custom resource to report a ready status:
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
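Once the Ingress is reachable and DNS resolves to your ingress controller, you can connect through the exposed endpoint instead of port-forwarding. A sketch using the CLI's --server flag, assuming VCLUSTER_HOSTNAME resolves to your ingress controller:
vcluster connect VCLUSTER_NAME -n VCLUSTER_NAMESPACE --server=https://VCLUSTER_HOSTNAME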
Show real nodes
By default, vCluster creates pseudo nodes in the virtual cluster rather than syncing the host cluster's real nodes. However, deploying a vCluster instance via the CLI on a local Kubernetes cluster automatically enables real node syncing, so you would not see a difference in this context.
Pseudo nodes only have real values for the CPU, architecture, and operating system, while everything else is randomly generated. Therefore, for use cases requiring real node information, you can opt to sync the real nodes into the virtual cluster.
- Modify vcluster.yaml.
sync:
  fromHost:
    nodes:
      enabled: true
- Apply your changes.
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
vcluster create --upgrade VCLUSTER_NAME -n VCLUSTER_NAMESPACE -f vcluster.yaml
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm upgrade --install VCLUSTER_NAME vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace VCLUSTER_NAMESPACE \
--repository-config='' \
--version 0.20.0-beta.6
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm template VCLUSTER_NAME vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n VCLUSTER_NAMESPACE -f vcluster.yaml | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply vCluster config changes by editing the vcluster.yaml file and running terraform plan:
terraform plan
Review the planned changes and apply them if they look appropriate:
terraform apply
Add your vcluster.yaml config file to the valueFiles array in your Argo CD Application file.
Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: VCLUSTER_NAME
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: VCLUSTER_NAME
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: VCLUSTER_NAMESPACE
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply Cluster API changes by regenerating the cluster custom resource using clusterctl.
Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
export CLUSTER_NAME=VCLUSTER_NAME
export CLUSTER_NAMESPACE=VCLUSTER_NAMESPACE
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
After the changes have been applied, wait for the vCluster custom resource to report a ready status:
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
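After the change is applied, you can verify it from inside the virtual cluster; the node list should now show the host cluster's real node names and details instead of pseudo values:
vcluster connect VCLUSTER_NAME -n VCLUSTER_NAMESPACE
kubectl get nodes -o wide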
Sync ingress from host to virtual
If you want to use an ingress controller from the host cluster inside your virtual cluster, enable IngressClass syncing from the host to the virtual cluster.
- Modify vcluster.yaml.
sync:
  fromHost:
    ingressClasses:
      enabled: true
- Apply your changes.
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
vcluster create --upgrade VCLUSTER_NAME -n VCLUSTER_NAMESPACE -f vcluster.yaml
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm upgrade --install VCLUSTER_NAME vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace VCLUSTER_NAMESPACE \
--repository-config='' \
--version 0.20.0-beta.6
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm template VCLUSTER_NAME vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n VCLUSTER_NAMESPACE -f vcluster.yaml | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply vCluster config changes by editing the vcluster.yaml file and running terraform plan:
terraform plan
Review the planned changes and apply them if they look appropriate:
terraform apply
Add your vcluster.yaml config file to the valueFiles array in your Argo CD Application file.
Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: VCLUSTER_NAME
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: VCLUSTER_NAME
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: VCLUSTER_NAMESPACE
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply Cluster API changes by regenerating the cluster custom resource using clusterctl.
Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
export CLUSTER_NAME=VCLUSTER_NAME
export CLUSTER_NAMESPACE=VCLUSTER_NAMESPACE
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
After the changes have been applied, wait for the vCluster custom resource to report a ready status:
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
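To confirm the sync, list the ingress classes from inside the virtual cluster; the host cluster's IngressClass objects should appear:
kubectl get ingressclasses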
Sync ingress from virtual to host
Create an ingress in a virtual cluster to make a service in that virtual cluster available via a hostname/domain. Instead of having to run a separate ingress controller in each virtual cluster, sync the ingress resource to the host cluster so that the virtual cluster can use a shared ingress controller running in the host cluster.
- Modify vcluster.yaml.
sync:
  toHost:
    ingresses:
      enabled: true
- Apply your changes.
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
vcluster create --upgrade VCLUSTER_NAME -n VCLUSTER_NAMESPACE -f vcluster.yaml
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm upgrade --install VCLUSTER_NAME vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace VCLUSTER_NAMESPACE \
--repository-config='' \
--version 0.20.0-beta.6
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm template VCLUSTER_NAME vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n VCLUSTER_NAMESPACE -f vcluster.yaml | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply vCluster config changes by editing the vcluster.yaml file and running terraform plan:
terraform plan
Review the planned changes and apply them if they look appropriate:
terraform apply
Add your vcluster.yaml config file to the valueFiles array in your Argo CD Application file.
Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: VCLUSTER_NAME
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: VCLUSTER_NAME
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: VCLUSTER_NAMESPACE
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply Cluster API changes by regenerating the cluster custom resource using clusterctl.
Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
export CLUSTER_NAME=VCLUSTER_NAME
export CLUSTER_NAMESPACE=VCLUSTER_NAMESPACE
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
After the changes have been applied, wait for the vCluster custom resource to report a ready status:
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
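With this enabled, any standard Ingress created inside the virtual cluster is synced to the host, where the shared ingress controller picks it up. A minimal sketch with hypothetical names (an nginx ingress class and a my-service backend):
apiVersion: networking.k8s.io/v1
kind: Ingress
metadata:
  name: my-ingress
spec:
  ingressClassName: nginx
  rules:
  - host: my-app.example.com
    http:
      paths:
      - path: /
        pathType: Prefix
        backend:
          service:
            name: my-service
            port:
              number: 80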
Sync services from host to virtual cluster
In this example, you map a service my-host-service in the namespace my-host-namespace to the virtual cluster service my-virtual-service inside the virtual cluster namespace team-x.
- Modify vcluster.yaml.
replicateServices:
  fromHost:
    - from: my-host-namespace/my-host-service
      to: team-x/my-virtual-service
- Apply your changes.
- vCluster CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
vcluster create --upgrade VCLUSTER_NAME -n VCLUSTER_NAMESPACE -f vcluster.yaml
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm upgrade --install VCLUSTER_NAME vcluster \
--values vcluster.yaml \
--repo https://charts.loft.sh \
--namespace VCLUSTER_NAMESPACE \
--repository-config='' \
--version 0.20.0-beta.6
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Since the v0.20 release is a beta, you need to specify the version or Helm uses the latest GA version.
helm template VCLUSTER_NAME vcluster --repo https://charts.loft.sh --version 0.20.0-beta.6 -n VCLUSTER_NAMESPACE -f vcluster.yaml | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply vCluster config changes by editing the vcluster.yaml file and running terraform plan:
terraform plan
Review the planned changes and apply them if they look appropriate:
terraform apply
Add your vcluster.yaml config file to the valueFiles array in your Argo CD Application file.
Since the v0.20 release is a beta, you need to specify the targetRevision or Helm uses the latest GA version.
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: VCLUSTER_NAME
  namespace: argocd
spec:
  project: default
  source:
    chart: vcluster
    repoURL: https://charts.loft.sh
    targetRevision: 0.20.0-beta.6
    helm:
      releaseName: VCLUSTER_NAME
      valueFiles:
        - vcluster.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: VCLUSTER_NAMESPACE
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
Apply Cluster API changes by regenerating the cluster custom resource using clusterctl.
Since the v0.20 release is a beta, you need to export the chart version, or Helm uses the latest GA version.
export CLUSTER_NAME=VCLUSTER_NAME
export CLUSTER_NAMESPACE=VCLUSTER_NAMESPACE
export KUBERNETES_VERSION=1.29.3
export HELM_VALUES=$(cat vcluster.yaml)
export CHART_VERSION=0.20.0-beta.6
clusterctl generate cluster ${CLUSTER_NAME} \
--infrastructure vcluster \
--kubernetes-version ${KUBERNETES_VERSION} \
--target-namespace ${CLUSTER_NAMESPACE} | kubectl apply -f -
Replace:
- VCLUSTER_NAME with your vCluster instance name.
- VCLUSTER_NAMESPACE with the namespace where you deployed vCluster.
After the changes have been applied, wait for the vCluster custom resource to report a ready status:
kubectl wait --for=condition=ready vcluster -n $CLUSTER_NAMESPACE $CLUSTER_NAME --timeout=300s
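After the change is applied, the host service is reachable inside the virtual cluster under the mapped name. Connect to the virtual cluster and verify, for example:
kubectl get service my-virtual-service -n team-x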
Delete vCluster
Deleting a vCluster instance also deletes all objects within it and all state related to the vCluster. If the namespace on the host cluster was created by the vCluster CLI, then that namespace is also deleted.
- CLI
- Helm
- kubectl
- Terraform
- Argo CD
- Cluster API
To delete a vCluster using the CLI:
vcluster delete my-vcluster --namespace team-x
To delete the vCluster deployed with Helm:
helm delete my-vcluster -n team-x --repository-config=''
The easiest option to delete a virtual cluster using kubectl is to delete the host namespace:
kubectl delete namespace team-x
In case you have multiple vClusters or any other resources in this namespace, you can also just delete the vCluster-related resources:
kubectl delete -n team-x serviceaccount vcluster-1
kubectl delete -n team-x role vcluster-1
kubectl delete -n team-x rolebinding vcluster-1
kubectl delete -n team-x service vcluster-1
kubectl delete -n team-x service vcluster-1-headless
kubectl delete -n team-x statefulset vcluster-1
To delete a vCluster managed by Terraform:
- Remove the vCluster configuration from your Terraform files.
- Apply the changes:
terraform apply
- Alternatively, if you want to remove all resources managed by the Terraform configuration in the current workspace:
terraform destroy
To delete a vCluster deployed with Argo CD:
kubectl delete application my-vcluster -n argocd
Argo CD automatically uninstalls the vCluster once the application is deleted and the deletion is synced.
To delete a vCluster managed by the Cluster API:
kubectl delete vcluster my-vcluster -n team-x