Lab 1: Create AKS Cluster

Prerequisites

  1. Azure Account

Instructions

  1. Log in to the Azure Portal at https://portal.azure.com.

  2. Open the Azure Cloud Shell


  3. The first time you start Cloud Shell, you will be prompted to create a storage account.

  4. Once your Cloud Shell has started, clone the workshop repo into the Cloud Shell environment:

    git clone https://github.com/Azure/kubernetes-hackfest
    
    cd kubernetes-hackfest/labs/create-aks-cluster

    Note: In the cloud shell, you are automatically logged into your Azure subscription.

  5. Create a unique identifier suffix for resources to be created in this lab.

    export UNIQUE_SUFFIX=$USER$RANDOM
    
    # Check the value
    echo $UNIQUE_SUFFIX
    

    Note this value; it will be used in the next few labs. The variable may reset if your shell times out, so PLEASE WRITE IT DOWN.
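    Since the variable can reset when the shell times out, one way to make it persistent is to append the export to your ~/.bashrc, which Cloud Shell sources on each new session. This is a sketch assuming the default Bash shell:

```shell
# Generate the suffix and persist it across Cloud Shell sessions
# (sketch; assumes Bash and a writable ~/.bashrc)
export UNIQUE_SUFFIX=$USER$RANDOM
echo "export UNIQUE_SUFFIX=$UNIQUE_SUFFIX" >> ~/.bashrc

# Check the value
echo $UNIQUE_SUFFIX
```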

  6. Create an Azure Resource Group in East US.

    export RGNAME=kubernetes-hackfest
    export LOCATION=eastus
    
    az group create -n $RGNAME -l $LOCATION 
  7. Deploy Log Analytics Workspace

    export LA_NAME=k8monitor
    
    # workspace Name must be unique
    export WORKSPACENAME=k8logs-$UNIQUE_SUFFIX
    
    az group deployment create -n $WORKSPACENAME -g $RGNAME --template-file azuredeploy-loganalytics.json \
    --parameters workspaceName=$WORKSPACENAME \
    --parameters location=$LOCATION \
    --parameters sku="Standalone"

    Get Workspace ID

    az group deployment list -g $RGNAME -o tsv  --query "[].id" | grep "k8logs"

    Export WORKSPACEID based on the output above:

    export WORKSPACEID=<value>
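    Alternatively, the two steps above can be combined so the ID is captured directly into the variable. This is a sketch; the grep pattern assumes your deployment name starts with k8logs, as set in the WORKSPACENAME variable earlier:

```shell
# Capture the matching deployment ID directly into WORKSPACEID
# (sketch; assumes exactly one deployment name contains "k8logs")
export WORKSPACEID=$(az group deployment list -g $RGNAME -o tsv --query "[].id" | grep "k8logs")
echo $WORKSPACEID
```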
  8. Create your AKS cluster in the resource group created above with 3 nodes, targeting Kubernetes version 1.10.3, with Container Insights and HTTP Application Routing enabled.

    • Use a unique CLUSTERNAME:
    export CLUSTERNAME=aks-$UNIQUE_SUFFIX

    The below command can take 10-20 minutes to run as it is creating the AKS cluster. Please be PATIENT...

    az aks create -n $CLUSTERNAME -g $RGNAME -k 1.10.3 \
    --generate-ssh-keys -l $LOCATION \
    --node-count 3 \
    --enable-addons http_application_routing,monitoring \
    --workspace-resource-id $WORKSPACEID
  9. Verify your cluster status. The ProvisioningState should be Succeeded.

    az aks list -o table
    
    Name                 Location    ResourceGroup         KubernetesVersion    ProvisioningState    Fqdn
    -------------------  ----------  --------------------  -------------------  -------------------  -------------------------------------------------------------------
    ODLaks-v2-gbb-16502  eastus   ODL_aks-v2-gbb-16502  1.8.6                Succeeded odlaks-v2--odlaks-v2-gbb-16-b23acc-17863579.hcp.centralus.azmk8s.io
  10. Get the Kubernetes config files for your new AKS cluster

    az aks get-credentials -n $CLUSTERNAME -g $RGNAME
  11. Verify you have API access to your new AKS cluster

    Note: It can take 5 minutes for your nodes to appear and be in READY state. You can run watch kubectl get nodes to monitor status.

    kubectl get nodes
    
    NAME                       STATUS    ROLES     AGE       VERSION
    aks-nodepool1-26522970-0   Ready     agent     33m       v1.10.3

    To see more details about your cluster:

    kubectl cluster-info
    
    Kubernetes master is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443
    addon-http-application-routing-default-http-backend is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/addon-http-application-routing-default-http-backend/proxy
    addon-http-application-routing-nginx-ingress is running at http://168.62.191.18:80 http://168.62.191.18:443
    Heapster is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/heapster/proxy
    KubeDNS is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kube-dns:dns/proxy
    kubernetes-dashboard is running at https://cluster-dw-kubernetes-hackf-80066e-a44f3eb0.hcp.eastus.azmk8s.io:443/api/v1/namespaces/kube-system/services/kubernetes-dashboard/proxy 

    You should now have a Kubernetes cluster running with 3 nodes. You do not see the master servers for the cluster because these are managed by Microsoft. The control plane services that manage the Kubernetes cluster, such as scheduling, API access, the configuration data store, and object controllers, are all provided as services to the nodes.

Troubleshooting / Debugging

To further debug and diagnose cluster problems, use

kubectl cluster-info dump

Docs / References

Lab 2: Create AKS Cluster Namespaces

This lab creates namespaces that reflect a representative example of an organization's environments, in this case DEV, UAT, and PROD. We will also apply the appropriate permissions, limits, and resource quotas to each of the namespaces.

Prerequisites

  1. Build AKS Cluster (from above)

Instructions

  1. Create Three Namespaces

    # Create namespaces
    kubectl apply -f create-namespaces.yaml
    
    # Look at namespaces
    kubectl get ns
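    For reference, a namespace manifest like create-namespaces.yaml might look something like the following. This is a minimal sketch of the three environments the lab describes, not necessarily the exact contents of the repo's file:

```yaml
# Sketch of a manifest creating the three environment namespaces
apiVersion: v1
kind: Namespace
metadata:
  name: dev
---
apiVersion: v1
kind: Namespace
metadata:
  name: uat
---
apiVersion: v1
kind: Namespace
metadata:
  name: prod
```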
  2. Assign CPU, Memory and Storage Limits to Namespaces

    # Create namespace limits
    kubectl apply -f namespace-limitranges.yaml
    
    # Get list of namespaces and drill into one
    kubectl get ns
    kubectl describe ns uat
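    A LimitRange manifest such as namespace-limitranges.yaml typically sets per-container defaults and minimums. The sketch below is illustrative only (the names and values are assumptions, chosen to be consistent with the limit tests later in this lab, where a 100m CPU request is rejected as too low):

```yaml
# Sketch of a LimitRange for the dev namespace (values are illustrative)
apiVersion: v1
kind: LimitRange
metadata:
  name: dev-limits
  namespace: dev
spec:
  limits:
  - type: Container
    min:
      cpu: 200m        # requests below this are forbidden
      memory: 128Mi
    defaultRequest:    # applied when a pod specifies no requests
      cpu: 250m
      memory: 256Mi
    default:           # applied when a pod specifies no limits
      cpu: 500m
      memory: 512Mi
```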
  3. Assign CPU, Memory and Storage Quotas to Namespaces

    # Create namespace quotas
    kubectl apply -f namespace-quotas.yaml
    
    # Get list of namespaces and drill into one
    kubectl get ns
    kubectl describe ns dev
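    A ResourceQuota manifest such as namespace-quotas.yaml caps the aggregate resources a namespace can consume. Again, the names and values below are a sketch, chosen to be consistent with the quota tests in the next step, where a 1Gi memory request is rejected but 512Mi fits:

```yaml
# Sketch of a ResourceQuota for the dev namespace (values are illustrative)
apiVersion: v1
kind: ResourceQuota
metadata:
  name: dev-quota
  namespace: dev
spec:
  hard:
    requests.cpu: "1"
    requests.memory: 1Gi
    limits.cpu: "2"
    limits.memory: 2Gi
```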
  4. Test out Limits and Quotas in dev Namespace

    # Test Limits - Forbidden because the CPU request is below the namespace minimum
    kubectl run nginx-limittest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=100m,memory=256Mi' -n dev
    # Test Limits - Pass due to automatic assignment within limits via defaults
    kubectl run nginx-limittest --image=nginx --restart=Never --replicas=1 --port=80 -n dev
    # Check running pod and dev Namespace Allocations
    kubectl get po -n dev
    kubectl describe ns dev
    # Test Quotas - Forbidden due to memory quota exceeded
    kubectl run nginx-quotatest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=500m,memory=1Gi' -n dev
    # Test Quotas - Pass due to memory within quota
    kubectl run nginx-quotatest --image=nginx --restart=Never --replicas=1 --port=80 --requests='cpu=500m,memory=512Mi' -n dev
    # Check running pod and dev Namespace Allocations
    kubectl get po -n dev
    kubectl describe ns dev
  5. Clean up quotas

    kubectl delete -f namespace-limitranges.yaml
    kubectl delete -f namespace-quotas.yaml
    
    kubectl describe ns dev
    kubectl describe ns uat
    kubectl describe ns prod
    

Troubleshooting / Debugging

  • The limits and quotas of a namespace can be found via the kubectl describe ns <...> command. You will also be able to see current allocations.
  • If pods are not deploying then check to make sure that CPU, Memory and Storage amounts are within the limits and do not exceed the overall quota of the namespace.

Docs / References