Wednesday, June 19, 2019

Persistent MySQL with NFS

1) First, we generate a throwaway YAML as a starting point:

kubectl run mysql --image=mysql --dry-run -o yaml > mysql.yml

In the end, the manifest we are going to paste in already has everything resolved; we only need to look at the differences. Since we want the storage to be persistent, we will do it over NFS, served from the master.
What we need is:

A) Create the PV
B) Create the PVC
C) Create the MySQL deployment

Let's start with A:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: lab-vol
spec:
  capacity:
    storage: 10Gi
  volumeMode: Filesystem
  accessModes:
    - ReadWriteMany
  persistentVolumeReclaimPolicy: Recycle
  nfs:
    path: /var/nfs/general
    server: qqmelo1c.mylabserver.com
    readOnly: false


I defined an NFS server. Remember that it is critical to have nfs-common installed and running on both workers.
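A minimal sketch of that prep work, assuming Ubuntu workers, the export shown in the PV above, and the manifest saved as pv-nfs.yml (hypothetical file name):

# on each worker: NFS client bits
sudo apt-get install -y nfs-common
# from any node: confirm the export is visible
showmount -e qqmelo1c.mylabserver.com
# create the PV and check it shows up as Available
kubectl create -f pv-nfs.yml
kubectl get pv lab-vol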

Now the PVC.

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: nfs-pvc
spec:
  accessModes:
  - ReadWriteMany
  resources:
    requests:
      storage: 1Gi
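A quick sanity check that the claim binds, assuming it is saved as pvc-nfs.yml (hypothetical file name):

kubectl create -f pvc-nfs.yml
kubectl get pvc nfs-pvc    # STATUS should become Bound, against lab-vol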


And now the persistent MySQL service.


Here I define a Service, which I will change later, together with the Deployment and its persistent storage:

apiVersion: v1
kind: Service
metadata:
  name: mysql
spec:
  ports:
  - port: 3306
  selector:
    app: mysql
  clusterIP: None
---
apiVersion: apps/v1 # for versions before 1.9.0 use apps/v1beta2
kind: Deployment
metadata:
  name: mysql
spec:
  selector:
    matchLabels:
      app: mysql
  strategy:
    type: Recreate
  template:
    metadata:
      labels:
        app: mysql
    spec:
      containers:
      - image: mysql:5.6
        name: mysql
        env:
          # Use secret in real usage
        - name: MYSQL_ROOT_PASSWORD
          value: password
        ports:
        - containerPort: 3306
          name: mysql
        volumeMounts:
        - name: mysql-persistent-storage
          mountPath: /var/lib/mysql
      volumes:
      - name: mysql-persistent-storage
        persistentVolumeClaim:
          claimName: nfs-pvc
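To roll it out and give it a quick smoke test (a sketch; the client one-liner follows the upstream MySQL example and assumes the manifest is saved as mysql.yml):

kubectl apply -f mysql.yml
kubectl get pods -l app=mysql
# throwaway client pod; the headless service resolves as "mysql"
kubectl run -it --rm --image=mysql:5.6 --restart=Never mysql-client -- mysql -h mysql -ppassword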

HAProxy ACLs

root@qqmelo1c:~/postgres# cat /etc/haproxy/haproxy.cfg
global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http


listen 3.16.21.136
    bind *:80
    mode http
    stats enable
    stats auth  cda:cda
    balance roundrobin
    option forwardfor
    #acl is_for_backend2 path_reg ^$|^/$|^/nodos|^/bpages
    acl is_for_backend2 path_beg -i /nodos
    use_backend backend2 if is_for_backend2

    default_backend backend1

    backend backend1
    server worker2 172.31.100.16:31759  check   # worker2 private IP
    server worker1 172.31.104.150:31759  check  # worker1 private IP

    backend backend2
    server worker2 172.31.100.16:30737  check   # worker2 private IP
    server worker1 172.31.104.150:30737  check  # worker1 private IP
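A quick way to exercise the ACL from outside, using the public IP from the listen block (the paths are just illustrative):

curl -I http://3.16.21.136/           # default  -> backend1
curl -I http://3.16.21.136/nodos/x    # path begins with /nodos -> backend2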

Thursday, June 13, 2019

HAPROXY - K8s

On the master node, I installed HAProxy.

Here is the configuration file:


global
        log /dev/log    local0
        log /dev/log    local1 notice
        chroot /var/lib/haproxy
        stats socket /run/haproxy/admin.sock mode 660 level admin expose-fd listeners
        stats timeout 30s
        user haproxy
        group haproxy
        daemon

        # Default SSL material locations
        ca-base /etc/ssl/certs
        crt-base /etc/ssl/private

        # Default ciphers to use on SSL-enabled listening sockets.
        # For more information, see ciphers(1SSL). This list is from:
        #  https://hynek.me/articles/hardening-your-web-servers-ssl-ciphers/
        # An alternative list with additional directives can be obtained from
        #  https://mozilla.github.io/server-side-tls/ssl-config-generator/?server=haproxy
        ssl-default-bind-ciphers ECDH+AESGCM:DH+AESGCM:ECDH+AES256:DH+AES256:ECDH+AES128:DH+AES:RSA+AESGCM:RSA+AES:!aNULL:!MD5:!DSS
        ssl-default-bind-options no-sslv3

defaults
        log     global
        mode    http
        option  httplog
        option  dontlognull
        timeout connect 5000
        timeout client  50000
        timeout server  50000
        errorfile 400 /etc/haproxy/errors/400.http
        errorfile 403 /etc/haproxy/errors/403.http
        errorfile 408 /etc/haproxy/errors/408.http
        errorfile 500 /etc/haproxy/errors/500.http
        errorfile 502 /etc/haproxy/errors/502.http
        errorfile 503 /etc/haproxy/errors/503.http
        errorfile 504 /etc/haproxy/errors/504.http


listen 192.168.0.140
    bind *:80
    mode http
    stats enable
    stats auth  cda:cda
    balance roundrobin
    option forwardfor
    server worker1 192.168.0.141:30224  check  # worker1 private IP
    server worker2 192.168.0.142:30224  check  # worker2 private IP
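Before (re)loading, it never hurts to validate the file first; a short sketch:

haproxy -c -f /etc/haproxy/haproxy.cfg
systemctl restart haproxy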





Sunday, June 2, 2019

Building a Kubernetes cluster

Let's begin our journey of learning Kubernetes by setting up a practice cluster. This will allow you to get hands-on with Kubernetes as quickly as possible, so that as you learn about various Kubernetes concepts you will be able to work with them in a real cluster if you choose. In this lesson, I will guide you through the process of setting up a simple Kubernetes cluster using Linux Academy's cloud playground. After completing this lesson, you will have a simple cluster that you can work with, and you will be familiar with the process of standing up a cluster on Ubuntu servers.
Here are the commands used in this lesson. Feel free to use them as a reference, or just use them to follow along!
Add the Docker Repository on all three servers.
curl -fsSL https://download.docker.com/linux/ubuntu/gpg | sudo apt-key add -
sudo add-apt-repository    "deb [arch=amd64] https://download.docker.com/linux/ubuntu \
   $(lsb_release -cs) \
   stable"
Add the Kubernetes repository on all three servers.
curl -s https://packages.cloud.google.com/apt/doc/apt-key.gpg | sudo apt-key add -
cat << EOF | sudo tee /etc/apt/sources.list.d/kubernetes.list
deb https://apt.kubernetes.io/ kubernetes-xenial main
EOF
Install Docker, Kubeadm, Kubelet, and Kubectl on all three servers.
NOTE: There are some issues being reported when installing version 1.12.2-00 from the Kubernetes ubuntu repositories. You can work around this by using version 1.12.7-00 for kubelet, kubeadm, and kubectl.
sudo apt-get update
sudo apt-get install -y docker-ce=18.06.1~ce~3-0~ubuntu kubelet=1.12.7-00 kubeadm=1.12.7-00 kubectl=1.12.7-00
sudo apt-mark hold docker-ce kubelet kubeadm kubectl
Enable net.bridge.bridge-nf-call-iptables on all three nodes.
echo "net.bridge.bridge-nf-call-iptables=1" | sudo tee -a /etc/sysctl.conf
sudo sysctl -p
On only the Kube Master server, initialize the cluster and configure kubectl.
sudo kubeadm init --pod-network-cidr=10.244.0.0/16
mkdir -p $HOME/.kube
sudo cp -i /etc/kubernetes/admin.conf $HOME/.kube/config
sudo chown $(id -u):$(id -g) $HOME/.kube/config
Install the flannel networking plugin in the cluster by running this command on the Kube Master server.
kubectl apply -f https://raw.githubusercontent.com/coreos/flannel/bc79dd1505b0c8681ece4de4c0d86c5cd2643275/Documentation/kube-flannel.yml
The kubeadm init command that you ran on the master should output a kubeadm join command containing a token and hash. You will need to copy that command from the master and run it on both worker nodes with sudo.
sudo kubeadm join $controller_private_ip:6443 --token $token --discovery-token-ca-cert-hash $hash
Now you are ready to verify that the cluster is up and running. On the Kube Master server, check the list of nodes.
kubectl get nodes
It should look something like this:
NAME                      STATUS   ROLES    AGE   VERSION
wboyd1c.mylabserver.com   Ready    master   54m   v1.12.2
wboyd2c.mylabserver.com   Ready    <none>   49m   v1.12.2
wboyd3c.mylabserver.com   Ready    <none>   49m   v1.12.2
Make sure that all three of your nodes are listed and that all have a STATUS of Ready.
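As an extra sanity check, it is worth confirming that the flannel and DNS pods came up as well (a sketch):

kubectl get pods -n kube-system
kubectl get pods --all-namespaces -o wide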

Wednesday, April 3, 2019

GLPI in 2 minutes

Well, if you wanted something simple, here it goes.

First we create the glpi project:

oc new-project glpi

and then:

oc new-app docker.io/diouxx/glpi

(I took this image because it seems to be the official one.)

Then, the usual routine:

[root@qqmelo1c ~]# oc get pods
NAME            READY     STATUS              RESTARTS   AGE
glpi-1-deploy   1/1       Running             0          13s
glpi-1-kcbkm    0/1       ContainerCreating   0          10s
[root@qqmelo1c ~]# oc describe pod glpi-1-kcbkm
Name:               glpi-1-kcbkm
Namespace:          glpi
Priority:           0
PriorityClassName:  <none>
Node:               qqmelo1c.mylabserver.com/172.31.38.153
Start Time:         Thu, 04 Apr 2019 02:47:47 +0000
Labels:             app=glpi
                    deployment=glpi-1
                    deploymentconfig=glpi
Annotations:        openshift.io/deployment-config.latest-version=1
                    openshift.io/deployment-config.name=glpi
                    openshift.io/deployment.name=glpi-1
                    openshift.io/generated-by=OpenShiftNewApp
                    openshift.io/scc=anyuid
Status:             Running
IP:                 10.128.0.136
.....

And then, to check that everything went smoothly:

lynx 10.128.0.136
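If we also want to reach it from outside the cluster, we can expose a route (a sketch; oc new-app should already have created a glpi service):

oc expose svc/glpi
oc get route glpi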

And if we are still bored, we can do a bit of magic with:

[root@qqmelo1c ~]# oc get pods glpi-1-kcbkm -o yaml > glpi.yml
[root@qqmelo1c ~]# cat glpi.yml
apiVersion: v1
kind: Pod
metadata:
  annotations:
    openshift.io/deployment-config.latest-version: "1"
    openshift.io/deployment-config.name: glpi
    openshift.io/deployment.name: glpi-1
    openshift.io/generated-by: OpenShiftNewApp
    openshift.io/scc: anyuid
  creationTimestamp: 2019-04-04T02:47:47Z
  generateName: glpi-1-
  labels:
    app: glpi
    deployment: glpi-1
    deploymentconfig: glpi
......


Sunday, March 31, 2019

DevStack on Ubuntu 16.04

Double-check /etc/hosts:

192.168.0.150 ubuntu.balinux.com.ar ubuntu


Change GRUB: the line

GRUB_CMDLINE_LINUX="net.ifnames=0 biosdevname=0"

must end up exactly like that, then regenerate the config:

grub-mkconfig -o /boot/grub/grub.cfg

and then adjust the network interfaces accordingly.


--------------

The setup.sh:

#! /bin/sh

export LANG=en_US.utf-8
export LC_ALL=en_US.utf-8

sudo apt-get install -y git

git clone https://git.openstack.org/openstack-dev/devstack

cp /devstack/local.conf devstack

cd devstack
./stack.sh








And the local.conf:

[[local|localrc]]

HOST_IP=192.168.0.150
FLOATING_RANGE=192.168.0.224/27
FIXED_RANGE=172.17.24.0/24
FIXED_NETWORK_SIZE=256
# Set basic passwords
DATABASE_PASSWORD=openstack
ADMIN_PASSWORD=openstack
SERVICE_PASSWORD=openstack
RABBIT_PASSWORD=openstack

Monday, March 18, 2019

PVs in OpenShift

For storage in OpenShift, we need to understand that there are two pieces:

1) the PV resource
2) the claim (PVC) that a Pod makes against that PV.

Let's go for the first resource, the PV. We will back it with NFS.


$ ssh root@$SERVER-IP
  1. Create a directory named /var/tmp/share where only the user and group nfsnobody have any permissions.
# mkdir -p /var/tmp/share
# chown nfsnobody:nfsnobody /var/tmp/share
# chmod 700 /var/tmp/share
  2. Under /etc/, create an exports file setting the location of your NFS directory.
Create a new file under /etc/exports.d/$NAME with the following contents:
/var/tmp/share *(rw,async,all_squash)
  3. Export the folder so that the OpenShift node can access the storage. If using SELinux, make sure that SELinux permits usage of NFS on both the NFS server and OpenShift node1.
# exportfs -a
If using SELinux:
# setsebool virt_use_nfs 1
  4. Verify that node1 has access to the NFS by manually mounting the newly created folder on the /mnt directory.
$ ssh root@$NODE-1-IP
# mount -t nfs $NFS-SERVER-IP:/var/tmp/share /mnt
And if all of this is fine, we generate the PV YAML.

We create a file, for example pv-nfs.yaml:

apiVersion: v1
kind: PersistentVolume
metadata:
  name: ex280-data
spec:
  capacity:
    storage: 10Gi
  accessModes:
  - ReadWriteOnce
  nfs:
    path: /var/tmp/share
    server: $NFS-LOCATION
  persistentVolumeReclaimPolicy: Retain
and then we run:

oc create -f pv-nfs.yaml

To check that everything is OK:

[root@qqmelo1c ~]# oc get pv
NAME         CAPACITY   ACCESS MODES   RECLAIM POLICY   STATUS      CLAIM     STORAGECLASS   REASON    AGE
ex280-data   10Gi       RWO            Retain           Available                                      11m
[root@qqmelo1c ~]#  oc describe pv ex280-data
Name:            ex280-data
Labels:          <none>
Annotations:     <none>
Finalizers:      [kubernetes.io/pv-protection]
StorageClass:
Status:          Available
Claim:
Reclaim Policy:  Retain
Access Modes:    RWO
Capacity:        10Gi
Node Affinity:   <none>
Message:
Source:
    Type:      NFS (an NFS mount that lasts the lifetime of a pod)
    Server:    qqmelo3c.mylabserver.com
    Path:      /var/tmp/share
    ReadOnly:  false
Events:          <none>
[root@qqmelo1c ~]#


As we can see, everything is fine; the PV is just waiting for a pod to claim it!
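For completeness, a minimal claim that would bind to it (a sketch; the claim name ex280-claim and the file name pvc-nfs.yaml are made up):

apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: ex280-claim
spec:
  accessModes:
  - ReadWriteOnce
  resources:
    requests:
      storage: 10Gi

oc create -f pvc-nfs.yaml
oc get pv ex280-data    # STATUS should flip from Available to Bound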

Secrets and much more

I wanted to create a secret for a mysql-ephemeral pod.

The first problem was that I could not find the MySQL definition in YAML format. They do exist in JSON format, though, and I found a one-liner that converts them:


 for i in $(find . -name '*.json');do  python -c 'import sys,json,yaml;print(yaml.safe_dump(json.loads(sys.stdin.read()), default_flow_style=False))' < $i > ${i/json/yaml};done


This was a huge help! With this running, I can convert the JSON files to YAML.

The next step: in the original definition we can see that a Secret is defined. More precisely, here:


kind: Secret
metadata:
  annotations:
    template.openshift.io/expose-database_name: '{.data[''database-name'']}'
    template.openshift.io/expose-password: '{.data[''database-password'']}'
    template.openshift.io/expose-root_password: '{.data[''database-root-password'']}'
    template.openshift.io/expose-username: '{.data[''database-user'']}'
  name: ${DATABASE_SERVICE_NAME}
stringData:
  database-name: ${MYSQL_DATABASE}
  database-password: ${MYSQL_PASSWORD}
  database-root-password: ${MYSQL_ROOT_PASSWORD}
  database-user: ${MYSQL_USER}



So we delete that block, because otherwise the import gives me an error.

Before importing, I must create my secret, which will be named mysql.


oc create secret generic mysql --from-literal='database-user'='mysql' --from-literal='database-password'='centos' --from-literal='database-root-password'='centos' --from-literal='database-name'='prueba'
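Just to double-check what ended up in the secret before importing (a quick verification sketch):

oc describe secret mysql
oc get secret mysql -o yaml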



And then, once we have the secret, we can run the command:

oc new-app -f mysql-ephemeral.yaml


Then we see something like this (I scaled it up along the way):

[root@qqmelo1c ~]# oc get pods -o=wide
NAME            READY     STATUS    RESTARTS   AGE       IP             NODE                       NOMINATED NODE
mysql-1-4x2rb   1/1       Running   0          3m        10.128.0.164   qqmelo1c.mylabserver.com   <none>
mysql-1-747nw   1/1       Running   0          3m        10.129.0.51    qqmelo2c.mylabserver.com   <none>
mysql-1-cc452   1/1       Running   0          4m        10.129.0.49    qqmelo2c.mylabserver.com   <none>
mysql-1-mj9wt   1/1       Running   0          3m        10.129.0.52    qqmelo2c.mylabserver.com   <none>
mysql-1-wf4hn   1/1       Running   0          4m        10.129.0.50    qqmelo2c.mylabserver.com   <none>
mysql-1-zgjkn   1/1       Running   0          3m        10.128.0.165   qqmelo1c.mylabserver.com   <none>
[root@qqmelo1c ~]# mysql -u mysql -pcentos -h 10.129.0.51
Welcome to the MariaDB monitor.  Commands end with ; or \g.
Your MySQL connection id is 43
Server version: 5.7.24 MySQL Community Server (GPL)

Copyright (c) 2000, 2018, Oracle, MariaDB Corporation Ab and others.

Type 'help;' or '\h' for help. Type '\c' to clear the current input statement.

MySQL [(none)]> show databases;
+--------------------+
| Database           |
+--------------------+
| information_schema |
| prueba             |
+--------------------+
2 rows in set (0.00 sec)

MySQL [(none)]>


Debugging... I'll explain this below!

      MYSQL_USER:            <set to the key 'database-user' in secret 'mysql'>           Optional: false
      MYSQL_PASSWORD:        <set to the key 'database-password' in secret 'mysql'>       Optional: false
      MYSQL_ROOT_PASSWORD:   <set to the key 'database-root-password' in secret 'mysql'>  Optional: false
      MYSQL_DATABASE:        <set to the key 'database-name' in secret 'mysql'>           Optional: false
    Mounts:
      /var/lib/mysql/data from mysql-data (rw)
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-75wrh (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  mysql-data:
    Type:    EmptyDir (a temporary directory that shares a pod's lifetime)
    Medium:
  default-token-75wrh:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-75wrh
    Optional:    false
QoS Class:       Burstable
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     node.kubernetes.io/memory-pressure:NoSchedule
Events:
  Type     Reason          Age               From                               Message
  ----     ------          ----              ----                               -------
  Normal   Scheduled       33s               default-scheduler                  Successfully assigned qq/mysql-1-j4k96 to qqmelo1c.mylabserver.com
  Warning  Failed          6s (x8 over 30s)  kubelet, qqmelo1c.mylabserver.com  Error: Couldn't find key database-name in Secret qq/mysql
  Normal   SandboxChanged  5s (x8 over 29s)  kubelet, qqmelo1c.mylabserver.com  Pod sandbox changed, it will be killed and re-created.
  Normal   Pulled          2s (x9 over 30s)  kubelet, qqmelo1c.mylabserver.com  Container image "docker-registry.default.svc:5000/openshift/mysql@sha256:c825717ffe30b7816006d95b8bb0bab7f462bdf9658a0092fc0149339037b0ee" already present on machine
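The Warning above tells the whole story: the pod expects a database-name key that this project's mysql secret does not have. A hedged way out is to recreate the secret with all four keys and redeploy (values are just the examples used earlier):

oc -n qq delete secret mysql
oc -n qq create secret generic mysql \
  --from-literal=database-name=prueba \
  --from-literal=database-user=mysql \
  --from-literal=database-password=centos \
  --from-literal=database-root-password=centos
oc -n qq rollout latest dc/mysql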

Postgres

https://portworx.com/run-ha-postgresql-red-hat-openshift/

scaling pods

oc scale --replicas 4 dc/ruby-ex

Saturday, March 16, 2019

Installing MariaDB

Just as we saw with MySQL, if we launch mariadb without any parameters it will fail.

So the best approach is to create a project and debug as we go.

After that, what we end up having to run is the following:


oc new-app -e  MYSQL_USER=marcelo -e MYSQL_PASSWORD=marcelo -e MYSQL_DATABASE=prueba -e  MYSQL_ROOT_PASSWORD=marcelo mariadb

and voilà!


All that is left is to connect to our brand-new DB!
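A connection sketch, along the lines of what we did with the ephemeral MySQL (the pod IP below is hypothetical; take the real one from oc get pods -o wide):

oc get pods -o wide
mysql -u marcelo -pmarcelo -h 10.129.0.53 prueba    # hypothetical pod IP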

Friday, March 15, 2019

NGINX

  225  oc new-project qqmelo
  226  oc new-app twalter/openshift-nginx:stable --name nginx-stable
  227  oc status
  228  oc statusw
  229  oc status
  230  oc get pods
  231  oc describe pod nginx-stable-1-c98h4
  232  curl 10.128.0.105
  233  curl 10.128.0.105:8080
  234  curl 10.128.0.105:8081


The page I took all of this from is:

https://torstenwalter.de/openshift/nginx/2017/08/04/nginx-on-openshift.html

Thursday, March 14, 2019

MySQL reading for tomorrow

http://www.mastertheboss.com/soa-cloud/openshift/accessing-openshift-services-remotely

Forcing an error and looking at the logs

[root@qqmelo1c ansible]# oc get pods
NAME            READY     STATUS             RESTARTS   AGE
mysql-1-pg6dd   0/1       CrashLoopBackOff   3          1m
[root@qqmelo1c ansible]# oc describe mysql-1-pg6dd
error: the server doesn't have a resource type "mysql-1-pg6dd"
[root@qqmelo1c ansible]# oc log mysql-1-pg6dd
log is DEPRECATED and will be removed in a future version. Use logs instead.
=> sourcing 20-validate-variables.sh ...
You must either specify the following environment variables:
  MYSQL_USER (regex: '^[a-zA-Z0-9_]+$')
  MYSQL_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
  MYSQL_DATABASE (regex: '^[a-zA-Z0-9_]+$')
Or the following environment variable:
  MYSQL_ROOT_PASSWORD (regex: '^[a-zA-Z0-9_~!@#$%^&*()-=<>,.?;:|]+$')
Or both.
Optional Settings:
  MYSQL_LOWER_CASE_TABLE_NAMES (default: 0)
  MYSQL_LOG_QUERIES_ENABLED (default: 0)
  MYSQL_MAX_CONNECTIONS (default: 151)
  MYSQL_FT_MIN_WORD_LEN (default: 4)
  MYSQL_FT_MAX_WORD_LEN (default: 20)
  MYSQL_AIO (default: 1)
  MYSQL_KEY_BUFFER_SIZE (default: 32M or 10% of available memory)
  MYSQL_MAX_ALLOWED_PACKET (default: 200M)
  MYSQL_TABLE_OPEN_CACHE (default: 400)
  MYSQL_SORT_BUFFER_SIZE (default: 256K)
  MYSQL_READ_BUFFER_SIZE (default: 8M or 5% of available memory)
  MYSQL_INNODB_BUFFER_POOL_SIZE (default: 32M or 50% of available memory)
  MYSQL_INNODB_LOG_FILE_SIZE (default: 8M or 15% of available memory)
  MYSQL_INNODB_LOG_BUFFER_SIZE (default: 8M or 15% of available memory)

For more information, see https://github.com/sclorg/mysql-container
[root@qqmelo1c ansible]# oc describe pod mysql-1-pg6dd
Name:               mysql-1-pg6dd
Namespace:          qq
Priority:           0
PriorityClassName:  <none>
Node:               qqmelo1c.mylabserver.com/172.31.115.132
Start Time:         Fri, 15 Mar 2019 03:53:06 +0000
Labels:             app=mysql
                    deployment=mysql-1
                    deploymentconfig=mysql
Annotations:        openshift.io/deployment-config.latest-version=1
                    openshift.io/deployment-config.name=mysql
                    openshift.io/deployment.name=mysql-1
                    openshift.io/generated-by=OpenShiftNewApp
                    openshift.io/scc=restricted
Status:             Running
IP:                 10.128.0.32
Controlled By:      ReplicationController/mysql-1
Containers:
  mysql:
    Container ID:   docker://d47e3145a0b82e7daf873316d513b66ca487615f761ef253d492de65e6dee73b
    Image:          docker-registry.default.svc:5000/openshift/mysql@sha256:c825717ffe30b7816006d95b8bb0bab7f462bdf9658a0092fc0149339037b0ee
    Image ID:       docker-pullable://docker-registry.default.svc:5000/openshift/mysql@sha256:c825717ffe30b7816006d95b8bb0bab7f462bdf9658a0092fc0149339037b0ee
    Port:           3306/TCP
    Host Port:      0/TCP
    State:          Waiting
      Reason:       CrashLoopBackOff
    Last State:     Terminated
      Reason:       Error
      Exit Code:    1
      Started:      Fri, 15 Mar 2019 03:53:54 +0000
      Finished:     Fri, 15 Mar 2019 03:53:54 +0000
    Ready:          False
    Restart Count:  3
    Environment:    <none>
    Mounts:
      /var/run/secrets/kubernetes.io/serviceaccount from default-token-kn55l (ro)
Conditions:
  Type              Status
  Initialized       True
  Ready             False
  ContainersReady   False
  PodScheduled      True
Volumes:
  default-token-kn55l:
    Type:        Secret (a volume populated by a Secret)
    SecretName:  default-token-kn55l
    Optional:    false
QoS Class:       BestEffort
Node-Selectors:  node-role.kubernetes.io/compute=true
Tolerations:     <none>
Events:
  Type     Reason     Age               From                               Message
  ----     ------     ----              ----                               -------
  Normal   Scheduled  1m                default-scheduler                  Successfully assigned qq/mysql-1-pg6dd to qqmelo1c.mylabserver.com
  Normal   Pulled     46s (x4 over 1m)  kubelet, qqmelo1c.mylabserver.com  Container image "docker-registry.default.svc:5000/openshift/mysql@sha256:c825717ffe30b7816006d95b8bb0bab7f462bdf9658a0092fc0149339037b0ee" already present on machine
  Normal   Created    46s (x4 over 1m)  kubelet, qqmelo1c.mylabserver.com  Created container
  Normal   Started    46s (x4 over 1m)  kubelet, qqmelo1c.mylabserver.com  Started container
  Warning  BackOff    7s (x8 over 1m)   kubelet, qqmelo1c.mylabserver.com  Back-off restarting failed container



But the clearest hint comes from oc log,

since we have not passed in the required environment variables (the secrets)...
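A quick way out of that state without rebuilding the app is to inject the variables into the deployment config, which triggers a new deployment (a sketch; the values are just examples):

oc set env dc/mysql \
  MYSQL_USER=marcelo MYSQL_PASSWORD=marcelo \
  MYSQL_DATABASE=prueba MYSQL_ROOT_PASSWORD=marcelo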

OKD 3.11 ALL IN ONE (single node)

1)

Check that the hostname and /etc/hosts are correct:

[root@qqmelo1c cloud_user]# vim /etc/hosts
127.0.0.1   localhost localhost.localdomain localhost4 localhost4.localdomain4
::1         localhost localhost.localdomain localhost6 localhost6.localdomain6
# Cloud Server Hostname mapping
172.31.115.132   qqmelo1c.mylabserver.com qqmelo1c master



2) Generate the SSH keys

[root@qqmelo1c cloud_user]# ssh-keygen -t rsa
Generating public/private rsa key pair.
Enter file in which to save the key (/root/.ssh/id_rsa):
Enter passphrase (empty for no passphrase):
Enter same passphrase again:
Your identification has been saved in /root/.ssh/id_rsa.
Your public key has been saved in /root/.ssh/id_rsa.pub.
The key fingerprint is:
SHA256:MH3+KnjMDzjolWsaJ60wtHltYlMedGpMtBn7lkqIcz0 root@qqmelo1c.mylabserver.com
The key's randomart image is:
+---[RSA 2048]----+
|      o          |
|     . *         |
|      O o .      |
|   . * * +       |
|  + o E S .      |
| . = O B   .     |
|  = O %+.   .    |
|   * Xoo=. .     |
|    +o.. oo      |
+----[SHA256]-----+

Now we copy the key so that Ansible can run against the node:

[root@qqmelo1c cloud_user]# ssh-copy-id master
/bin/ssh-copy-id: INFO: Source of key(s) to be installed: "/root/.ssh/id_rsa.pub"
/bin/ssh-copy-id: INFO: attempting to log in with the new key(s), to filter out any that are already installed
/bin/ssh-copy-id: INFO: 1 key(s) remain to be installed -- if you are prompted now it is to install the new keys
root@master's password:

Number of key(s) added: 1

Now try logging into the machine, with:   "ssh 'master'"
and check to make sure that only the key(s) you wanted were added.


Once this is in place, we need to install the packages:

yum -y install centos-release-openshift-origin311 epel-release docker git pyOpenSSL
systemctl start docker
systemctl enable docker
yum -y install openshift-ansible

And after this, what remains is to edit /etc/ansible/hosts:

[OSEv3:children]
masters
nodes
etcd

[OSEv3:vars]
ansible_ssh_user=root
ansible_become=true
openshift_deployment_type=origin
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
openshift_master_htpasswd_users={'admin':'$apr1$qEufOSTX$R1ObTWs7YVwmKSjmaWzCa0', 'developer': '$apr1$ojtj8hHs$C9UZtdLoDs2Hcdw2msKOm0'}
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
openshift_master_default_subdomain=apps.mylabserver.com
openshift_docker_insecure_registries=172.30.0.0/16
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability

[masters]
qqmelo1c.mylabserver.com openshift_schedulable=true containerized=false

[etcd]
qqmelo1c.mylabserver.com

[nodes]
qqmelo1c.mylabserver.com openshift_node_group_name='node-config-all-in-one'


Then we run the prerequisites playbook:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

And then the installation playbook:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml
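Once the deploy_cluster playbook finishes, a quick sanity check (a sketch):

oc get nodes
oc get pods --all-namespaces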

Retrieving events in OpenShift

If we need to look at what happened, we should do the following...


[root@qqmelo1c net.d]# oc get events
LAST SEEN   FIRST SEEN   COUNT     NAME                                        KIND                   SUBOBJECT                           TYPE      REASON                    SOURCE                                    MESSAGE
6h          10h          24        template-service-broker.158bdce32cfd10dd    ClusterServiceBroker                                       Normal    FetchedCatalog            service-catalog-controller-manager        Successfully fetched catalog entries from broker.
6h          10h          127       ansible-service-broker.158bdce0550e223a     ClusterServiceBroker                                       Warning   ErrorFetchingCatalog      service-catalog-controller-manager        Error getting broker catalog: Status: 404; ErrorMessage: ; Description: ; ResponseError:
57m         57m          1         qqmelo1c.mylabserver.com.158bfb583b759511   Node                                                       Normal    NodeHasSufficientPID      kubelet, qqmelo1c.mylabserver.com         Node qqmelo1c.mylabserver.com status is now: NodeHasSufficientPID
57m         57m          1         qqmelo1c.mylabserver.com.158bfb582e98163a   Node                                                       Normal    Starting                  kubelet, qqmelo1c.mylabserver.com         Starting kubelet.
57m         57m          2         qqmelo1c.mylabserver.com.158bfb583b756e50   Node                                                       Normal    NodeHasNoDiskPressure     kubelet, qqmelo1c.mylabserver.com         Node qqmelo1c.mylabserver.com status is now: NodeHasNoDiskPressure
57m         57m          2         qqmelo1c.mylabserver.com.158bfb583b74e6f2   Node                                                       Normal    NodeHasSufficientDisk     kubelet, qqmelo1c.mylabserver.com         Node qqmelo1c.mylabserver.com status is now: NodeHasSufficientDisk
57m         57m          2         qqmelo1c.mylabserver.com.158bfb583b754883   Node     



And if we want to see the log of a particular pod, we run the following command:

 oc logs router-1-dmbx

       

Wednesday, March 13, 2019

Managing Bindings


We create a user:

htpasswd -b /etc/origin/master/htpasswd mguazzardo espejito

We want to grant this user admin

and also cluster-admin:

oc adm policy add-role-to-user admin mguazzardo
oc adm policy add-cluster-role-to-user cluster-admin mguazzardo

As we saw earlier, the difference between admin and cluster-admin can be seen from the infrastructure side:
a cluster-admin, for example, can run oc get nodes.
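To verify that the grants actually took effect (a sketch, using the user and password created above):

oc login -u mguazzardo -p espejito
oc whoami
oc get nodes    # works only because of the cluster-admin binding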

OKD 3.11


Instalando OKD 3.11


En dos nodos.


The master will be

qqmelo1c.mylabserver.com

The node will be

qqmelo2c.mylabserver.com

We will have to do the ssh-keygen and ssh-copy-id steps.

Then, once everything is OK and the corresponding hosts entries are in place, we can run what is left...

yum -y install centos-release-openshift-origin311 epel-release docker git pyOpenSSL
systemctl start docker
systemctl enable docker
yum -y install openshift-ansible


After this, copy the following into /etc/ansible/hosts:


[OSEv3:children]
masters
nodes
etcd
[OSEv3:vars]
# admin user created in previous section
ansible_ssh_user=root
ansible_become=true
openshift_deployment_type=origin
# use HTPasswd for authentication
openshift_master_identity_providers=[{'name': 'htpasswd_auth', 'login': 'true', 'challenge': 'true', 'kind': 'HTPasswdPasswordIdentityProvider'}]
# CHANGE THIS LINE / openssl passwd -apr1 redhat
openshift_master_htpasswd_users={'admin':'$apr1$qEufOSTX$R1ObTWs7YVwmKSjmaWzCa0', 'developer': '$apr1$ojtj8hHs$C9UZtdLoDs2Hcdw2msKOm0'}
os_sdn_network_plugin_name='redhat/openshift-ovs-multitenant'
# define default sub-domain for Master node
openshift_master_default_subdomain=apps.mylabserver.com
# allow unencrypted connection within cluster
openshift_docker_insecure_registries=172.30.0.0/16
openshift_disable_check=disk_availability,docker_storage,memory_availability,docker_image_availability
[masters]
qqmelo1c.mylabserver.com openshift_schedulable=true containerized=false
[etcd]
qqmelo1c.mylabserver.com
[nodes]
# defined values for [openshift_node_group_name] in the file below
# [/usr/share/ansible/openshift-ansible/roles/openshift_facts/defaults/main.yml]
qqmelo1c.mylabserver.com openshift_node_group_name='node-config-all-in-one'
qqmelo2c.mylabserver.com openshift_node_group_name='node-config-compute'

Then we run the prerequisites playbook:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/prerequisites.yml

And then the installation playbook:

ansible-playbook /usr/share/ansible/openshift-ansible/playbooks/deploy_cluster.yml


Fixing the NotReady node error

[root@qqmelo1c ~]# oc get nodes
NAME                       STATUS     ROLES                  AGE   VERSION
qqmelo1c.mylabserver.com   Ready      compute,infra,master   16d   v1.11.0+d4cacc0
qqmelo2c.mylabserver.com   NotReady   compute                16d   v1.11.0+d4cacc0

To get a better look:
oc describe node/qqmelo2c.mylabserver.com
and I find:

KubeletNotReady runtime network not ready:

I fixed it by copying...



[root@qqmelo2c ~]# cd /etc/cni/net.d/

[root@qqmelo2c net.d]# scp master:$PWD/* .

[root@qqmelo1c net.d]# ls
80-openshift-network.conf
[root@qqmelo1c net.d]# cat 80-openshift-network.conf

{
  "cniVersion": "0.2.0",
  "name": "openshift-sdn",
  "type": "openshift-sdn"
}

[root@qqmelo2c net.d]# systemctl restart origin-node

and with that it was fixed.

Tuesday, March 12, 2019

Managing metrics in OpenShift

# ansible-playbook \
/usr/share/ansible/openshift-ansible/playbooks/openshift-metrics/config.yml
This playbook installs the OpenShift metrics stack.

After running this playbook, you should see the following pods running:

oc get pods
hawkular-cassandra
hawkular-metrics
heapster
We also get a web-based metrics console.

Replication controllers


When we need horizontal scaling, for example because a pod is running short of resources, we use OpenShift's autoscaling feature.
This is handled through replication controllers.

$ oc autoscale dc/frontend --min 1 --max 10 --cpu-percent=80
deploymentconfig "frontend" autoscaled
Here we have an example of autoscaling a pod.

Viewing a Horizontal Pod Autoscaler

To view the status of a horizontal pod autoscaler:
  • Use the oc get command to view information on the CPU utilization and pod limits:
    $ oc get hpa/hpa-resource-metrics-cpu
    NAME                         REFERENCE                                 TARGET    CURRENT  MINPODS        MAXPODS    AGE
    hpa-resource-metrics-cpu     DeploymentConfig/default/frontend/scale   80%       79%      1              10         8d
    The output includes the following:
    • Target. The targeted average CPU utilization across all pods controlled by the deployment configuration.
    • Current. The current CPU utilization across all pods controlled by the deployment configuration.
    • Minpods/Maxpods. The minimum and maximum number of replicas that can be set by the autoscaler.
  • Use the oc describe command for detailed information on the horizontal pod autoscaler object.
    $ oc describe hpa/hpa-resource-metrics-cpu
    Name:                           hpa-resource-metrics-cpu
    Namespace:                      default
    Labels:                         <none>
    CreationTimestamp:              Mon, 26 Oct 2015 21:13:47 -0400
    Reference:                      DeploymentConfig/default/frontend/scale
    Target CPU utilization:         80% 
    Current CPU utilization:        79% 
    Min replicas:                   1 
    Max replicas:                   4 
    ReplicationController pods:     1 current / 1 desired
    Conditions: 
      Type                  Status  Reason                  Message
      ----                  ------  ------                  -------
      AbleToScale           True    ReadyForNewScale        the last scale time was sufficiently old as to warrant a new scale
      ScalingActive         True    ValidMetricFound        the HPA was able to successfully calculate a replica count from pods metric http_requests
      ScalingLimited        False   DesiredWithinRange      the desired replica count is within the acceptable range
    Events:
    In this output: the Target CPU utilization is the average utilization the autoscaler aims for across all pods controlled by the deployment configuration; the Current CPU utilization is what those pods are using right now; Min replicas and Max replicas bound how far the autoscaler may scale down or up; and if the object uses the v2alpha1 API, status conditions are also displayed.
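For reference, the same autoscaler can be expressed as an object instead of via oc autoscale. A sketch against the autoscaling/v1 API, mirroring the oc autoscale example above (the scaleTargetRef apiVersion for a DeploymentConfig is an assumption based on OpenShift 3.11):

apiVersion: autoscaling/v1
kind: HorizontalPodAutoscaler
metadata:
  name: frontend
spec:
  scaleTargetRef:
    apiVersion: apps.openshift.io/v1
    kind: DeploymentConfig
    name: frontend
  minReplicas: 1
  maxReplicas: 10
  targetCPUUtilizationPercentage: 80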