
Friday, August 4, 2023

RCLONE - Data migration from on-premise to cloud using rclone tool



Recently we had a requirement to migrate around 70TB of on-premises data from Swift storage to an AWS S3 bucket. We initially explored various migration options and chose rclone because it supports a wide range of cloud storage providers, including Google Drive, Dropbox, Amazon S3, and Microsoft OneDrive.


Why rclone?


Rclone is mature, open-source software originally inspired by rsync and written in Go. It is multi-threaded, fast at transferring data, and lets you control the transfer speed.

Here is a summary of the Rclone equivalents to the Unix commands:


rsync: Rclone sync is a powerful tool for synchronizing files and directories between your local machine and the cloud. It can be used to create backups, mirror directories, or simply keep your local and cloud data in sync.

cp: Rclone copy is a simple tool for copying files and directories from your local machine to the cloud or vice versa.

mv: Rclone move moves files and directories from a source to a destination (local or cloud), deleting them from the source once transferred.

mount: Rclone mount allows you to mount a cloud storage bucket as a directory on your local machine. This means that you can access your cloud files as if they were stored on your local hard drive.

ls: Rclone ls lists the contents of a cloud storage bucket.

ncdu: Rclone ncdu displays the disk usage of a cloud storage bucket.

tree: Rclone tree displays a tree view of the contents of a cloud storage bucket.

rm: Rclone rm removes files and directories from the cloud.

cat: Rclone cat displays the contents of a file in the cloud.



How to configure and use rclone?


Here I am providing the configuration for Amazon S3. Follow the steps outlined below, and also refer to https://rclone.org/s3/


  1. Download the rclone software for your respective OS - https://rclone.org/downloads/

  2. Run “rclone config”, it will guide you through an interactive setup process.

Source configuration : https://rclone.org/swift/

Target configuration : https://rclone.org/s3/#configuration

  3. Validate the details in the configuration file (~/.config/rclone/rclone.conf) on your local machine and ensure you can connect to both the source and the target using rclone.



$ cat ~/.config/rclone/rclone.conf

[onprem_swift]

type = swift

env_auth = true

user = my_object_store

key =

auth = https://swift.company.com/auth/v1.0

domain =

tenant = my_object_store

tenant_domain =

region =

storage_url =

auth_version =

endpoint_type = public



[cloud-aws-s3]

type = s3

provider = AWS

access_key_id = <aws s3 access key id>

secret_access_key = < secret key>

region = us-east-2

location_constraint = us-east-2

acl = private

storage_class = STANDARD


Display the directories in the Swift/S3 buckets:

[user@host01 ~]$ rclone lsd cloud-aws-s3:prod

           0 2023-10-28 02:33:56        -1 data

           0 2023-10-28 02:33:56        -1 images





  4. Now we are ready to transfer the data:


[user@host01 ~]$ rclone sync --progress --transfers=8 --bwlimit=10M onprem_swift:prod cloud-aws-s3:prod
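In practice the sync is easier to manage from a small wrapper script (the rclone_prod.sh visible in the ps output below is presumably something like this; the extra --checkers and logging flags are my assumptions, not part of the original run):

```shell
#!/usr/bin/env bash
# Hypothetical sketch of rclone_prod.sh; remote names match the config above,
# but the --checkers and log flags are assumptions, not from the original run.
set -euo pipefail

SRC="onprem_swift:prod"        # source remote:container
DST="cloud-aws-s3:prod"        # target remote:bucket
LOG="/tmp/rclone_prod.log"     # hypothetical log location

# Build the command as an array so every flag stays correctly quoted.
CMD=(rclone sync
     --progress                # live transfer statistics
     --transfers=8             # parallel file transfers
     --checkers=16             # parallel size/checksum checks
     --bwlimit=10M             # cap bandwidth at 10 MBytes/s
     --log-file="$LOG"
     --log-level=INFO
     "$SRC" "$DST")

echo "Running: ${CMD[*]}"
# Only execute when rclone is actually installed on this machine.
if command -v rclone >/dev/null 2>&1; then
  "${CMD[@]}"
fi
```

Keeping the flags in one script makes repeated runs consistent across machines and leaves a log file to review after each pass.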


[user@host01 ~] $ ps -ef|grep rclone

user+ 4018633       1  7 00:05 pts/0    00:00:26 rclone sync --progress onprem_swift:prod cloud-aws-s3:prod

user+ 4047166 2812493  0 00:11 pts/0    00:00:00 sh rclone_prod.sh

user+ 4047172 4047166 29 00:11 pts/0    00:00:11 rclone sync --progress onprem_swift:prod cloud-aws-s3:prod

user+ 4050901 2812493  0 00:11 pts/0    00:00:00 grep --color=auto rclone

[user@host01 ~]$



The rclone tool gives us fine-grained control over parallel data transfer, and it is easy to use because its commands closely resemble familiar Unix commands.


Best Practices:


  1. We tried various options to speed up the transfer between Swift and AWS S3 but initially could not reach an acceptable transfer rate. After analyzing the details thoroughly, we created the AWS S3 bucket in the region closest to us (us-east-2) and used around 5 machines to transfer in parallel. These changes significantly sped up the migration, with average speeds between 40 MB/s and 90 MB/s.


  2. I recommend using the latest version of rclone; it has a great support community where you can post issues and get prompt assistance from the forum.
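The multi-machine split mentioned above can be sketched as a small planning script. The prefix names and hostnames below are hypothetical; in a real run the prefixes would come from `rclone lsd onprem_swift:prod`:

```shell
#!/usr/bin/env bash
# Round-robin assignment of top-level prefixes to transfer machines.
# PREFIXES and HOSTS are hypothetical placeholders for illustration.
set -euo pipefail

PREFIXES=(data images logs exports archive)   # hypothetical top-level dirs
HOSTS=(host01 host02 host03 host04 host05)    # the ~5 transfer machines

declare -A PLAN
for i in "${!PREFIXES[@]}"; do
  host="${HOSTS[$(( i % ${#HOSTS[@]} ))]}"
  PLAN[${PREFIXES[$i]}]="$host"
  # Each machine then runs its own bounded sync of one prefix:
  echo "$host: rclone sync --progress onprem_swift:prod/${PREFIXES[$i]} cloud-aws-s3:prod/${PREFIXES[$i]}"
done
```

Splitting by prefix keeps each machine's working set independent, so a failure on one host only requires re-running that host's prefixes.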



Friday, June 30, 2023

Oracle Database 23C - Interesting Features



Oracle Database 23C is highly anticipated and has a lot of key features. It is a one-stop database for the modern needs of both NoSQL and relational workloads, providing the most sought-after features listed below.


Takeaway - Oracle Database 23C supports all the key database features without the need for specialized databases.

JSON:

JSON-relational duality views combine the advantages of using JSON documents with the advantages of the relational model, while avoiding the limitations of each.

Simple SQL 

Many interesting SQL syntax enhancements that simplify how applications and users access the database.

Graph Support

Native support for graphs and relationships in the Oracle database.

Useful link - https://docs.oracle.com/en/database/oracle/oracle-database/23/nfcoa/#Oracle%C2%AE-Database

JSON - https://blogs.oracle.com/database/post/json-relational-duality-app-dev#:~:text=The%20new%20feature%20in%20Oracle,a%20JSON%20Relational%20Duality%20View.&text=Using%20Duality%20Views%2C%20data%20is,JSON%20documents%20(figure%202).



Friday, March 10, 2023

Oracle EBS - How to setup DMZ HTTP Reverse Proxy Server



1. Install apache on reverse proxy server.


1.1 Download the Apache HTTP Server source file: https://httpd.apache.org/download.cgi

httpd-2.4.34.tar.gz


mkdir -p /opt/app/software

copy all install packages to /opt/app/software


1.2 Download apache dependent files apr & apr-utility :

apr-1.6.3.tar.gz

apr-util-1.6.1.tar.gz


1.3 Download PCRE ftp://ftp.csx.cam.ac.uk/pub/software/programming/pcre/

pcre2-10.31.tar.gz

pcre-8.42.tar.gz


1.4 Before installing Apache, check that the C and C++ compilers, libtool, expat-devel and apr-devel packages are available.

yum install -y mlocate

updatedb

yum list installed libgcc

yum remove libgcc.i686

yum install -y libgcc.x86_64 gcc-c++.x86_64 gcc.x86_64 compat-gcc-44.x86_64 compat-gcc-44-c++.x86_64


1.5 Download and install openssl which is used by apache


openssl-1.0.2o.tar.gz

Untar file in /opt/app/software


cd openssl-1.0.2o

./config --prefix=/usr/local/openssl -fPIC

make

make install

which openssl -- /bin/openssl

openssl    (launch it to verify the installation, then type 'exit')


1.6 Extract and install autoconf

Pre requisite rpms for autoconf : m4.x86_64, perl-ExtUtils-MakeMaker, Data-Dumper-2.161.tar.gz


yum install -y m4.x86_64

yum install -y perl-ExtUtils-MakeMaker

cd /opt/app/software/Data-Dumper-2.161

perl Makefile.PL

make

make install


cd ../autoconf-2.69

./configure --prefix=/usr/local/autoconf

make

make install

which autoconf


1.7 Install libtool

yum install -y libtool.x86_64


1.8 Extract and Install apr, apr-util & pcre.

Untar the files in /opt/app/software and rename the extracted directories by removing the version numbers.


tar -xvzf apr-1.6.3.tar.gz

tar -xvzf apr-util-1.6.1.tar.gz

tar -xvzf pcre-8.42.tar.gz

mv apr-1.6.3 apr

mv apr-util-1.6.1 apr-util

mv pcre-8.42 pcre

cd ./apr

./configure

make clean

make

make install

cd ../apr-util

yum install -y imlib.x86_64

yum install -y expat-devel.x86_64 expat.x86_64

./configure --with-apr=/usr/local/apr/bin/apr-1-config

make clean

make

make install

cd ../pcre

./configure --prefix=/usr/local/pcre

make clean

make

make install


1.9 Apache Installation

cd /opt/app/software/httpd-2.4.34

./buildconf

./configure --prefix=/opt/app/dmz --with-including-apr --with-pcre=/usr/local/pcre --with-ssl=/usr/local/openssl --enable-so --enable-mods-shared="ssl proxy proxy_http proxy_ftp proxy_connect headers"

make clean

make

make install

If you run into issues during configure/make, rerun buildconf below and then run the configure command again; otherwise the latest changes won't take effect.

./buildconf


1.10 Install mod_security for Apache:

Download and Install modsecurity-2.9.0.tar.gz

Prerequisites:

yum install libtool.x86_64 -- already installed

yum install -y libxml2-devel.x86_64

Untar and Install

mkdir -p /opt/app/dmz/mod_security

cd /opt/app/software

tar -zxvf modsecurity-2.9.0.tar.gz

cd modsecurity-2.9.0


export PATH=/usr/local/openssl:/usr/local/autoconf/bin:/usr/local/libtool/bin:$PATH

./autogen.sh

./configure --prefix=/opt/app/dmz/mod_security --with-apxs=/opt/app/dmz/bin/apxs

make

make install


1.11 Make sure the mod_security2.so file was generated in the Apache modules directory.

cd /opt/app/dmz/modules

ls -lrt mod_security2.so

cd /opt/app/dmz/mod_security

ls -lrt

Start the apache and check.

cd /opt/app/dmz/bin

ps -ef|grep httpd

./apachectl start

ps -ef|grep httpd

./apachectl stop

1.12. Enable the mod_security module in httpd.conf.

LoadModule security2_module modules/mod_security2.so

1.13. Now start the services and make sure there are no issues.

2. Add proxy pass entries to httpd.conf

Note: Disable the following SSL peer verification parameters:

         SSLProxyCheckPeerCN off

         SSLProxyCheckPeerName off
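As a sketch, the proxy pass entries for this step might look like the following (the internal hostname, port, and context path are hypothetical placeholders):

```apache
# httpd.conf fragment: forward requests to the hypothetical internal EBS web tier
SSLProxyEngine on
# Peer-name checks disabled as noted above
SSLProxyCheckPeerCN off
SSLProxyCheckPeerName off

ProxyPass        /OA_HTML/ https://ebs-internal.company.com:4443/OA_HTML/
ProxyPassReverse /OA_HTML/ https://ebs-internal.company.com:4443/OA_HTML/
```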

3. Copy url_fw.conf from the external node to ${APACHE_HOME}/conf on the proxy.

4. Enable only the iSupplier-related URLs in url_fw.conf.


Friday, February 24, 2023

Kubernetes Cheat Sheet



The commands below are useful for administering a Kubernetes environment.

 

Version:

# Get the Kubernetes version: the Server Version line shows the version running on the cluster

kubectl version


Pods:

# Get the details of all pods in one namespace

kubectl get pods -n <namespace>


#  Get the details of all pods in all namespaces

kubectl get pods --all-namespaces 


#  Get the details of all pods in the current namespace, with more details

kubectl get pods -o wide  




Deployments:

# Get all  the deployments 

kubectl get deployments


# Check the history of deployments including the revision

kubectl rollout history deployment <deployment name>


# Rolling  restart of the deployment

kubectl rollout restart deployment <deployment name>



Interacting with pods:

# Log in to a pod

kubectl exec -it <podname> -- bash


# Run a command in the pod

kubectl exec <podname> -- ls /





configmap :


# Get all configmap in current namespace

kubectl get cm


# Edit configmap 

kubectl edit cm <configmap name>


Secrets:


# Display all the secrets 

kubectl get secrets


# Display secrets in all namespaces

kubectl get secrets --all-namespaces



Services:


# Display all the services 

kubectl get svc


# List Services Sorted by Name

kubectl get services --sort-by=.metadata.name



Logs:

# Check the logs of the pod

kubectl logs <podname>


# Tail the logs of the pod

kubectl logs -f <podname>


Copy :

# Copy data from a local directory to the pod

kubectl cp $HOME/data <pod>:/opt/app/data



Storage:


# List all PersistentVolumes

kubectl get pv


# List all PersistentVolumes sorted by capacity

kubectl get pv --sort-by=.spec.capacity.storage


# Describe specific persistent volume

kubectl describe pv <pv_name>


# List all persistent volumes claims

kubectl get pvc


# Describe specific persistent volume claim

kubectl describe pvc <pvc_name>



Create/Delete resources:



# Create specific resource

kubectl apply -f <deployment>.yaml


# Create specific resource using URL

kubectl apply -f https://gcs.io/pod


# Delete specific resource

kubectl delete -f <deployment>.yaml
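As a minimal sketch, the <deployment>.yaml used with the apply/delete commands above might look like this (the name, labels, and image are hypothetical):

```yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: demo-app                # hypothetical deployment name
spec:
  replicas: 2
  selector:
    matchLabels:
      app: demo-app
  template:
    metadata:
      labels:
        app: demo-app
    spec:
      containers:
      - name: demo-app
        image: nginx:1.25       # hypothetical container image
        ports:
        - containerPort: 80
```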



Formatting output:


# Append the following to command to get output in json format 


-o json  


# Append the following to command to get output in yaml format 


-o yaml


# Append the following to command to get output in plain text format with additional information for pods.


-o wide


# Print a table using a comma-separated list of custom columns, e.g. NAME:.metadata.name,STATUS:.status.phase


-o=custom-columns=<spec>


Cluster details:


# Display the cluster details


kubectl cluster-info


The bash aliases below are quite useful:


    alias k="kubectl"

    alias allpods="kubectl get pods --all-namespaces"

    alias kc="kubectl create -f"

    alias kg="kubectl get"

    alias pods="kubectl get pods"

    alias ktop="kubectl top nodes"

    alias rcs="kubectl get rc"

    alias sv="kubectl get services"

    alias dep="kubectl get deployment"

    alias kd="kubectl describe"

    alias kdp="kubectl describe pod "

    alias kds="kubectl describe service "

    alias nodes="kubectl get nodes"

    alias klogs="kubectl logs"

    alias ns="kubectl get ns"

    alias deploys="kubectl get deployment"

    alias events="kubectl get events"

    alias kexec="kubectl exec -it "

    alias sec="kubectl get secrets"
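To make these aliases persist across sessions, append them to your shell profile. A small idempotent sketch (the profile path ~/.bashrc is an assumption; adjust for your shell):

```shell
#!/usr/bin/env bash
set -euo pipefail

PROFILE="${HOME}/.bashrc"   # assumed profile location

# Append an alias line only if it is not already present (idempotent).
add_alias() {
  local line="$1"
  grep -qxF "$line" "$PROFILE" 2>/dev/null || echo "$line" >> "$PROFILE"
}

add_alias 'alias k="kubectl"'
add_alias 'alias pods="kubectl get pods"'
add_alias 'alias nodes="kubectl get nodes"'
```

Running the script twice adds nothing the second time, so it is safe to rerun after adding new aliases to the list.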