Wednesday, March 7, 2018

Sample HAProxy config that proxies an HTTPS backend and is set up as active/passive.



global
   # to have these messages end up in /var/log/haproxy.log you will
   # need to:
   #
   # 1) configure syslog to accept network log events.  This is done
   #    by adding the '-r' option to the SYSLOGD_OPTIONS in
   #    /etc/sysconfig/syslog
   #
   # 2) configure local2 events to go to the /var/log/haproxy.log
   #   file. A line like the following can be added to
   #   /etc/sysconfig/syslog
   #
   #    local2.*                       /var/log/haproxy.log
   #
   log         127.0.0.1 local2
   tune.ssl.default-dh-param 2048
   chroot      /var/lib/haproxy
   pidfile     /var/run/haproxy.pid
   maxconn     4000
   user        haproxy
   group       haproxy
   daemon

   # turn on stats unix socket
   stats socket /var/lib/haproxy/stats

defaults
   mode                    http
   log                     global
   option                  httplog
   option                  dontlognull
   option http-server-close
   option forwardfor       except 127.0.0.0/8
   option                  redispatch
   retries                 3
   timeout http-request    10s
   timeout queue           1m
   timeout connect         10s
   timeout client          1m
   timeout server          1m
   timeout http-keep-alive 10s
   timeout check           10s
   maxconn                 3000


frontend SHINY_APP
   bind 1.2.3.40:443 ssl crt /etc/ssl/mycert.pem force-tlsv12
   reqadd X-Forwarded-Proto:\ https
   default_backend SHINY_BACKEND_443
   option httplog
   log global

frontend WEB_API
   bind 1.2.3.40:8443 ssl crt /etc/ssl/mycert.pem force-tlsv12
   reqadd X-Forwarded-Proto:\ https
   default_backend WEBAPI_BACKEND_8443
   option httplog
   log global


backend SHINY_BACKEND_443
   balance     roundrobin
   server      MAIN_SHINY_WEB_1.2.3.41 1.2.3.41:443 weight 100 minconn 80 maxconn 180 inter 5s check ssl verify none
   server      BACKUP_SHINY_WEB_1.2.3.42 1.2.3.42:443 weight 100 minconn 80 maxconn 180 inter 5s check ssl verify none backup

backend WEBAPI_BACKEND_8443
   balance     roundrobin
   server      MAIN_WEB_API_1.2.3.43 1.2.3.43:8443 weight 100 minconn 100 maxconn 200 inter 5s check ssl verify none
   server      BACKUP_WEB_API_1.2.3.44 1.2.3.44:8443 weight 100 minconn 100 maxconn 200 inter 5s check ssl verify none backup



listen stats
   bind 1.2.3.40:9999
   stats enable
   stats hide-version
   stats uri /stats
   stats show-desc Shiny Load Balancer Node
   stats refresh 10s
   stats auth admin:password #change this to your preferred credentials
   stats scope SHINY_BACKEND_443
   stats scope WEBAPI_BACKEND_8443
   stats admin if TRUE
   stats show-legends


# The above was done on CentOS 7.
# To configure logging, edit rsyslog.conf to accept connections (TCP or UDP), point local2.* to /var/log/haproxy.log, then restart the rsyslog service.
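A minimal rsyslog sketch for this, assuming the stock rsyslog on CentOS 7 (the UDP listener directives below just need to be uncommented in /etc/rsyslog.conf; the local2 rule can live in rsyslog.conf or a drop-in file):

# /etc/rsyslog.conf -- enable the UDP syslog listener
$ModLoad imudp
$UDPServerRun 514

# send HAProxy's local2 events to their own file
local2.*    /var/log/haproxy.log

# apply the change
systemctl restart rsyslog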

# The backup keyword on the second server of each backend sets the backup role: while the primary is up, nothing is routed to the backup. See the HAProxy documentation for further explanation.
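To confirm the active/passive behavior, one way (a sketch, assuming socat is installed) is to query the stats socket defined above and watch which server carries the sessions:

echo "show stat" | socat stdio /var/lib/haproxy/stats
# the MAIN_* servers should report UP and take the traffic;
# the BACKUP_* servers should report UP with zero sessions while the primary is healthy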

Sunday, February 11, 2018

Installing a commercial wildcard SSL cert on Zimbra 8

Install a commercial SSL cert on a Zimbra (8.8) Open Source server

Log in as root, then switch to the zimbra user:
su - zimbra

This SSL cert is a wildcard certificate.

Copy the private key (commercial.key), the signed certificate (company.com.crt), and the CA bundle (company.com.gd_bundle.crt) to /tmp on the server.

Change the owner of the cert files, or otherwise give the zimbra user read access to them, for example as sketched below.
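A minimal sketch (run as root; assumes the three files were uploaded to /tmp as above):

chown zimbra /tmp/commercial.key /tmp/company.com.crt /tmp/company.com.gd_bundle.crt
chmod 400 /tmp/commercial.key
chmod 444 /tmp/company.com.crt /tmp/company.com.gd_bundle.crt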

Do a verify first.

cd /tmp   # this is where the files were uploaded

commercial.key - the private key (generated alongside the CSR)
company.com.crt - the SSL cert downloaded from the SSL provider
company.com.gd_bundle.crt - the gd_bundle.crt (CA chain) downloaded from the SSL provider

Verify the certificates first.

#command to verify

/opt/zimbra/bin/zmcertmgr verifycrt comm commercial.key company.com.crt company.com.gd_bundle.crt

Copy /tmp/commercial.key to /opt/zimbra/ssl/zimbra/commercial/commercial.key, for example:
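The equivalent command (run as the zimbra user, paths as above):

cp /tmp/commercial.key /opt/zimbra/ssl/zimbra/commercial/commercial.key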

Once all checks are OK, proceed with the deployment.

The server where this cert was deployed is a mailstore server, so you will see mailstore-related services in the output below.

#command to deploy

zmcertmgr deploycrt comm company.com.crt company.com.gd_bundle.crt 

#output below
** Verifying 'company.com.crt' against '/opt/zimbra/ssl/zimbra/commercial/commercial.key'
Certificate 'company.com.crt' and private key '/opt/zimbra/ssl/zimbra/commercial/commercial.key' match.
** Verifying 'company.com.crt' against 'company.com.gd_bundle.crt'
Valid certificate chain: company.com.crt: OK
** Copying 'company.com.crt' to '/opt/zimbra/ssl/zimbra/commercial/commercial.crt'
** Copying 'company.com.gd_bundle.crt' to '/opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt'
** Appending ca chain 'company.com.gd_bundle.crt' to '/opt/zimbra/ssl/zimbra/commercial/commercial.crt'
** Importing cert '/opt/zimbra/ssl/zimbra/commercial/commercial_ca.crt' as 'zcs-user-commercial_ca' into cacerts '/opt/zimbra/common/lib/jvm/java/jre/lib/security/cacerts'
** NOTE: restart mailboxd to use the imported certificate.
** Installing imapd certificate '/opt/zimbra/conf/imapd.crt' and key '/opt/zimbra/conf/imapd.key'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.crt' to '/opt/zimbra/conf/imapd.crt'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.key' to '/opt/zimbra/conf/imapd.key'
** Creating file '/opt/zimbra/ssl/zimbra/jetty.pkcs12'
** Creating keystore '/opt/zimbra/conf/imapd.keystore'
** Installing ldap certificate '/opt/zimbra/conf/slapd.crt' and key '/opt/zimbra/conf/slapd.key'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.crt' to '/opt/zimbra/conf/slapd.crt'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.key' to '/opt/zimbra/conf/slapd.key'
** Creating file '/opt/zimbra/ssl/zimbra/jetty.pkcs12'
** Creating keystore '/opt/zimbra/mailboxd/etc/keystore'
** Installing mta certificate '/opt/zimbra/conf/smtpd.crt' and key '/opt/zimbra/conf/smtpd.key'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.crt' to '/opt/zimbra/conf/smtpd.crt'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.key' to '/opt/zimbra/conf/smtpd.key'
** Installing proxy certificate '/opt/zimbra/conf/nginx.crt' and key '/opt/zimbra/conf/nginx.key'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.crt' to '/opt/zimbra/conf/nginx.crt'
** Copying '/opt/zimbra/ssl/zimbra/commercial/commercial.key' to '/opt/zimbra/conf/nginx.key'
** NOTE: restart services to use the new certificates.
** Cleaning up 3 files from '/opt/zimbra/conf/ca'
** Removing /opt/zimbra/conf/ca/bea3f1c6.0
** Removing /opt/zimbra/conf/ca/ca.key
** Removing /opt/zimbra/conf/ca/ca.pem
** Copying CA to /opt/zimbra/conf/ca
** Copying '/opt/zimbra/ssl/zimbra/ca/ca.key' to '/opt/zimbra/conf/ca/ca.key'
** Copying '/opt/zimbra/ssl/zimbra/ca/ca.pem' to '/opt/zimbra/conf/ca/ca.pem'
** Creating CA hash symlink 'bea3f1c6.0' -> 'ca.pem'
** Creating /opt/zimbra/conf/ca/commercial_ca_1.crt
** Creating CA hash symlink '27eb7704.0' -> 'commercial_ca_1.crt'
** Creating /opt/zimbra/conf/ca/commercial_ca_2.crt
** Creating CA hash symlink 'cbf06781.0' -> 'commercial_ca_2.crt'
** Creating /opt/zimbra/conf/ca/commercial_ca_3.crt
** Creating CA hash symlink 'f081611a.0' -> 'commercial_ca_3.crt'


# Restart services (as the zimbra user): zmcontrol restart

You may then verify via the web UI.

Wednesday, January 31, 2018

Could not open file "pg_clog/0XXX"

I needed to VACUUM FULL the database but got this error:

mydb=# vacuum full;
ERROR:  could not access status of transaction 376865319
DETAIL:  Could not open file "pg_clog/0167": No such file or directory.





Solution found via Google:


-bash-3.2$ dd if=/dev/zero of=/pgsql/data/pg_clog/0167 bs=256k count=1
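The dd above writes a 256 KB segment of zeros, so the missing transaction statuses read back as "not committed". A small follow-up sketch (the data directory is taken from the command above; the ownership and mode are assumptions, adjust to your cluster):

chown postgres:postgres /pgsql/data/pg_clog/0167
chmod 600 /pgsql/data/pg_clog/0167
ls -l /pgsql/data/pg_clog/0167   # expect a 262144-byte file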


Then run VACUUM FULL again.

Sunday, January 28, 2018

Fixing iscsi connectivity issue between Centos 6 and Compellent Storage



I booted a CentOS 6 server so it could be mapped to a volume from the Compellent, but ran into an issue: the HBA/initiator was not detected on the Compellent side, even though a connectivity check showed the host was able to establish a connection to the iSCSI target.


I manually connected the host to the Compellent iSCSI IPs; in my case there were 8 IPs to connect to.

#connecting
for i in 1 2 3 4 5 6 7 8; do iscsiadm --mode discoverydb --type sendtargets --portal 1.2.3.$i --discover; done

#logging in
iscsiadm -m node -l
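To confirm the logins worked, a quick check (standard open-iscsi; each of the 8 portals should show a session):

iscsiadm -m session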



But I encountered an issue on the Compellent side: the existing initiator name did not show up, so I could not map the volume to that host.

The fix is to regenerate the initiator name. Use the command:


/sbin/iscsi-iname


The above will generate a new name; put it in /etc/iscsi/initiatorname.iscsi.



If you have an existing InitiatorName=iqn.1994-05.com.redhat:fc5b3050bf and that name is already registered against a different IP, replace it with the newly generated name and restart the iscsi service.
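A sketch of the rename step (assumes CentOS 6 init scripts; the generated IQN will differ on your host):

NEWIQN=$(/sbin/iscsi-iname)
echo "InitiatorName=$NEWIQN" > /etc/iscsi/initiatorname.iscsi
service iscsid restart
service iscsi restart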

Then check on the Compellent the HBA entry for the host's IP; it should display the new name and the IP, and you can now map the volume to that server.

Tuesday, January 2, 2018

sftp chroot environment - config and script



Of course, the sshd service should be enabled, and the sftpusers group referenced below must exist.
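A minimal prep sketch (assumes a systemd-based host; on older releases use chkconfig/service instead):

# one-time prep: the group and base directory the config and script below rely on
groupadd sftpusers
mkdir -p /home/sftpusers

# enable sshd and reload it after editing /etc/ssh/sshd_config below
systemctl enable sshd
systemctl reload sshd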


Sample Config:

#/etc/ssh/sshd_config
#
Protocol 2
SyslogFacility AUTHPRIV
MaxAuthTries 5
PubkeyAuthentication yes
PasswordAuthentication yes
ChallengeResponseAuthentication no
GSSAPIAuthentication yes
GSSAPICleanupCredentials yes
UsePAM yes
AcceptEnv LANG LC_CTYPE LC_NUMERIC LC_TIME LC_COLLATE LC_MONETARY LC_MESSAGES
AcceptEnv LC_PAPER LC_NAME LC_ADDRESS LC_TELEPHONE LC_MEASUREMENT
AcceptEnv LC_IDENTIFICATION LC_ALL
X11Forwarding yes
Ciphers aes128-ctr,aes192-ctr,aes256-ctr
MACs hmac-sha1,hmac-ripemd160
Subsystem       sftp    internal-sftp

Match Group sftpusers
        ChrootDirectory /home/sftpusers/%u
        ForceCommand internal-sftp
###EOF



Sample sftp adduser script

#!/bin/bash
# /usr/local/sbin/sftp_adduser.sh
#
if [ $# -lt 1 ]; then
        echo "Usage: $(basename $0) <sftp username>"
        echo "   eg. $(basename $0) sftp_something"
        exit 1
fi
RANPASS=`date +%s | sha256sum | base64 | head -c 8 ; echo`
USERNAME=$1
SFTPHOME=/home/sftpusers
SFTPSHELL=/usr/libexec/openssh/sftp-server

id $USERNAME > /dev/null 2>&1
if [ $? -eq 0 ]
then
        echo "$1 already exists, please try again using another name"
        exit 1
else
        # -m creates the home directory; the password is set with passwd below
        useradd -m -d $SFTPHOME/$USERNAME -s $SFTPSHELL $USERNAME
        echo $RANPASS > /tmp/passwd.tmp001
        sleep 1
        passwd --stdin $USERNAME < /tmp/passwd.tmp001
        rm -f /tmp/passwd.tmp001
        echo $USERNAME >> /etc/listfile/sshusers

        #set permission
        usermod -g sftpusers $USERNAME
        usermod -G sftpusers $USERNAME

        #This is the default folder for sftp users
        mkdir $SFTPHOME/$USERNAME/FILES
     
        #set permissions
        chown root $SFTPHOME/$USERNAME
        chmod go-w $SFTPHOME/$USERNAME
        chown $USERNAME:sftpusers $SFTPHOME/$USERNAME/FILES
        chmod u+rwX $SFTPHOME/$USERNAME
        chmod 755 $SFTPHOME/$USERNAME
        chmod g+rx $SFTPHOME/$USERNAME

        #Details
        echo USER CREATED: $USERNAME
        echo GENERATED PASSWORD: $RANPASS
        echo FTP INTERNAL IP: 1.2.3.4
        echo FTP EXTERNAL IP: 111.222.112.221
        echo USER: $USERNAME has been created at ftp 1.2.3.4 | mail -s "created at sftp 1.2.3.4 on `date`" admin@myhost.com
fi
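A usage sketch (the account name is made up; the IPs and mail recipient come from the script and should be adjusted to your environment):

# create the account
/usr/local/sbin/sftp_adduser.sh sftp_acme

# test from a client; the user lands chrooted in its home with a writable FILES directory
sftp sftp_acme@1.2.3.4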

Thursday, May 18, 2017

HAProxy sample config used on an actual setup


OS: CentOS 6.8
Real servers: CentOS 5.11



#---------------------------------------------------------------------
# Example configuration for a possible web application.  See the
# full configuration options online.
#
#   http://haproxy.1wt.eu/download/1.4/doc/configuration.txt
#
#---------------------------------------------------------------------

#---------------------------------------------------------------------
# Global settings
#---------------------------------------------------------------------
global
    #
    log         127.0.0.1 local2
    chroot      /var/lib/haproxy
    pidfile     /var/run/haproxy.pid
    maxconn     600
    user        haproxy
    group       haproxy
    daemon

    # turn on stats unix socket
    stats socket /var/lib/haproxy/stats
    tune.ssl.default-dh-param 2048

#---------------------------------------------------------------------
# common defaults that all the 'listen' and 'backend' sections will
# use if not designated in their block
#---------------------------------------------------------------------
defaults
    mode                    http
    log                     global
    option tcplog
    option                  dontlognull
    option forwardfor       except 127.0.0.0/8
    option http-server-close
    option abortonclose
    option                  redispatch
    retries                 3
    ####
    timeout http-request    60s
    timeout queue           60m
    timeout connect         300s
    timeout client          60m
    timeout server          60m
    timeout http-keep-alive 5s
    timeout check           10s

    #tune.ssl.default-dh-param 2048

frontend WEBAPPS_HTTP
        bind 10.1.1.12:80
        reqadd X-Forwarded-Proto:\ http
        default_backend WEBAPPS_Backend
       
frontend WEBAPPS_HTTPS
        bind 10.1.1.12:443 ssl crt /etc/ssl/webapps.pem force-tlsv12
        reqadd X-Forwarded-Proto:\ https
        default_backend WEBAPPS_Backend

backend WEBAPPS_Backend
        balance roundrobin
        stick-table type ip size 1m expire 1h
        stick on src
        option abortonclose
        option forwardfor except 127.0.0.0/8
        option http-server-close
        option httpchk HEAD /check.html HTTP/1.0
        server WEB_10.1.1.13 10.1.1.13:80 weight 20 minconn 80 maxconn 180 inter 5s check
        server WEB_10.1.1.14 10.1.1.14:80 weight 20 minconn 80 maxconn 180 inter 5s check
        server WEB_10.1.1.16 10.1.1.16:80 weight 20 minconn 80 maxconn 160 inter 5s check
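The httpchk line above expects a /check.html on each real server; a minimal sketch (assuming the default Apache docroot on the CentOS 5 web servers):

# run on each real server (10.1.1.13, .14, .16)
echo OK > /var/www/html/check.html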

listen  stats 10.1.1.12:8443
        mode            http
        log             global

        maxconn 10

        timeout client     100s
        timeout server     100s
        timeout connect    100s
        timeout queue   100s

        stats enable
        stats hide-version
        stats refresh 10s
        stats show-node
        stats scope WEBAPPS_Backend
        stats auth hauser:Passworditsuptoyou
        stats uri /stats
        stats admin if TRUE
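To sanity-check the stats page (a quick sketch; note this listener is plain HTTP on port 8443):

curl -u hauser:Passworditsuptoyou http://10.1.1.12:8443/stats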

Thursday, April 27, 2017

could not create unique index "pg_statistic_relid_att_inh_index" on postgresql 9.3 - FIXED

I had a newly restored database (restored via Symantec backup) but ran into a problem.

Scenario:

  • PostgreSQL version: 9.3 on CentOS
  • Able to access the database via the command line and pgAdmin 4 only; pgAdmin 3 does not work.
  • Creating any table fails; it will not proceed.
  • Running the command reindex system DBNAME gets stuck with the error below.

    DBNAME=# reindex system DBNAME;
    NOTICE: table "pg_catalog.pg_class" was reindexed
    ERROR: could not create unique index "pg_statistic_relid_att_inh_index"
    DETAIL: Key (starelid, staattnum, stainherit)=(2610, 15, f) is duplicated.

    LOG:
    Apr 26 14:50:44 stationname postgres[5452]: [10-2] 2017-04-26 14:50:44 PHT postgres DBNAME 127.0.0.1 DETAIL: Key (starelid, staattnum, stainherit)=(2610, 15, f) is duplicated.


    I got a tip from a blog to check whether there really was a duplicate, using the SQL command below (blog link: http://www.pateldenish.com/2016/09/recovering-corrupted-postgres-database.html):

    select starelid, staattnum, stainherit from pg_catalog.pg_statistic where starelid=2610 order by 2;


    DBNAME=# select starelid, staattnum, stainherit from pg_catalog.pg_statistic where starelid=2610 order by 2;
    starelid | staattnum | stainherit
    ----------+-----------+------------
    2610 | 1 | f
    2610 | 2 | f
    2610 | 3 | f
    2610 | 4 | f
    2610 | 5 | f
    2610 | 6 | f
    2610 | 7 | f
    2610 | 8 | f
    2610 | 9 | f
    2610 | 10 | f
    2610 | 11 | f
    2610 | 12 | f
    2610 | 13 | f
    2610 | 14 | f
    2610 | 15 | f
    2610 | 16 | f
    2610 | 17 | f
    2610 | 18 | f
    (18 rows)


    but there were no duplicates found.

    Since I could not create any additional tables, I would have had to restore from backup again anyway. Because I still had a working backup, before doing another restore I tried deleting the reported duplicate with the command below:

    DBNAME=# delete from pg_catalog.pg_statistic where starelid=2610 and staattnum=15;
    DELETE 1

    DBNAME=# reindex database DBNAME;
    NOTICE: table "pg_catalog.pg_class" was reindexed
    ERROR: could not create unique index "pg_statistic_relid_att_inh_index"
    DETAIL: Key (starelid, staattnum, stainherit)=(2610, 13, f) is duplicated.

    DBNAME=# delete from pg_catalog.pg_statistic where starelid=2610 and staattnum=13;
    DELETE 1


    After the last duplicate was deleted, I was able to run reindex system and reindex database; the indexes were recreated, and the issue with creating tables was fixed.
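    Deleting rows from pg_catalog.pg_statistic also discards the planner statistics for those columns, so as a follow-up (a sketch; adjust the connection details to your setup) it is worth regenerating them:

    psql -d DBNAME -c "ANALYZE;"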