
Tuesday, October 31, 2023

Security - What are Oracle CPU Patches and Why to Apply them - CRITICAL for Mission Critical Databases




Oracle Critical Patch Updates (CPU) and Patch Set Updates (PSU) are periodic security patches released by Oracle to address vulnerabilities in its products. These updates typically include fixes for security issues, bug fixes, and sometimes new features or enhancements.


The CPU is a collection of patches that address multiple security vulnerabilities across various Oracle products. It is usually released on a quarterly basis, with additional updates as needed for critical issues. The CPU includes fixes for both Oracle's own code and third-party components used in their products.


The PSU, on the other hand, is a cumulative update that includes the security fixes from the corresponding CPU plus additional high-impact bug fixes specific to a particular product or component. PSUs are also released on a regular quarterly schedule.


It is important for organizations using Oracle products to apply these patches in a timely manner to protect against potential security threats and ensure the stability and performance of their systems. Applying CPU/PSU patches can help mitigate risks associated with known vulnerabilities and maintain compliance with industry standards and regulations.


The procedure to apply Oracle CPU/PSU patches can vary depending on the specific products and versions being used, as well as the environment in which they are deployed. However, here is a general overview of the steps involved:


Identify the patches needed: Determine which products and versions require patching by reviewing the Oracle CPU/PSU advisories and identifying the relevant Common Vulnerabilities and Exposures (CVE) numbers.


Download the patches: Obtain the required patches from the Oracle Support website or other authorized sources.


Create a patching plan: Develop a detailed plan for applying the patches, including any necessary pre-patching activities such as backups, system downtime, and testing.


Prepare the environment: Ensure that the environment meets the prerequisites for patching, such as having sufficient disk space, meeting minimum software requirements, and having the appropriate permissions and access rights.


Apply the patches: Use the appropriate tools and methods to apply the patches, following the instructions provided by Oracle or other authorized sources. This may involve running scripts, applying patches manually, or using automated patching tools (a sample OPatch session is sketched after these steps).


Test the environment: Perform thorough testing to ensure that the patches have been applied successfully and that the environment is functioning correctly. This may include functional testing, performance testing, and security testing.


Document the patching process: Keep detailed records of the patching process, including the patches applied, the dates and times of application, and any issues encountered during the process.


Monitor the environment: Continuously monitor the environment to ensure that it remains secure and stable after patching. This may involve monitoring logs, system metrics, and other indicators of potential issues.


It's important to note that the specific steps and tools used for patching will depend on the products and versions being patched, as well as the environment in which they are deployed. It's always recommended to follow the official guidance provided by Oracle or other authorized sources when applying patches.
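

As an illustration, here is a minimal sketch of what applying a database CPU patch with OPatch can look like. The staging path and patch number below are placeholders; always follow the README shipped with the actual patch.


# Check which patches are already installed in the Oracle home

$ORACLE_HOME/OPatch/opatch lspatches


# From the unzipped patch directory (placeholder patch number), check for conflicts and apply with the database and listener stopped

cd /stage/cpu_patches/<patch_number>

$ORACLE_HOME/OPatch/opatch prereq CheckConflictAgainstOHWithDetail -ph ./

$ORACLE_HOME/OPatch/opatch apply


# After restarting the database, load the SQL portion of the patch

$ORACLE_HOME/OPatch/datapatch -verbose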

Oracle CPU Announcements:

The Critical Patch Update Advisory serves as the primary resource for reviewing all related advisories, security alerts, and bulletins issued by Oracle. This document provides a comprehensive list of affected products, risk assessments for the vulnerabilities addressed, and links to additional relevant documentation.


Prior to applying patches, it is crucial to thoroughly examine the supporting materials referenced in the Critical Patch Update Advisory.


The next four Critical Patch Update release dates are:


16 January 2024


16 April 2024


16 July 2024


15 October 2024


Where can we find CPU patches?


We can review the My Oracle Support notes below to find the patches released for a specific product, and then review the patch README to understand the patching procedure to follow.


Critical Patch Update (CPU) Patch Advisor for Oracle Fusion Middleware - Updated for January 2024 (Doc ID 2806740.2)


Critical Patch Update (CPU) Program Oct 2023 Patch Availability Document (DB-only) (Doc ID 2966413.1)




Saturday, September 30, 2023

Database Admin - Kubernetes cheat sheet



Below are commands that are useful for administering a Kubernetes environment.

 

Version:

# Get the Kubernetes version: the Server Version line shows the version of Kubernetes running on the cluster.

kubectl version


Pods:

# Get the details of all pods in one namespace

kubectl get pods -n <namespace>


#  Get the details of all pods in all namespaces

kubectl get pods --all-namespaces 


# Get all pods in the current namespace, with additional details

kubectl get pods -o wide  




Deployments:

# Get all  the deployments 

kubectl get deployments


# Check the rollout history of a deployment, including revisions

kubectl rollout history deployment <deployment name>


# Rolling restart of a deployment

kubectl rollout restart deployment <deployment name>
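

A related command, shown as a sketch (the revision number is a placeholder):

# Roll a deployment back to an earlier revision listed by the history command above

kubectl rollout undo deployment <deployment name> --to-revision=<revision>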



Interacting with pods:

# Log in to a pod

kubectl exec -it <podname> -- bash


# Run a command in the pod

kubectl exec <podname> -- ls /





ConfigMaps:


# Get all configmaps in the current namespace

kubectl get cm


# Edit a configmap

kubectl edit cm <configmap name>


Secrets:


# Display all the secrets 

kubectl get secrets


# Display all the secrets in all namespaces

kubectl get secrets --all-namespaces
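

Secret values are stored base64-encoded; the following sketch decodes a single value (the secret name and key are placeholders):

# Decode one key of a secret

kubectl get secret <secret name> -o jsonpath='{.data.<key>}' | base64 --decode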



Services:


# Display all the services 

kubectl get svc


# List Services Sorted by Name

kubectl get services --sort-by=.metadata.name



Logs:

# Check the logs of the pod

kubectl logs <podname>


# Tail the logs of the pod

kubectl logs -f <podname>


Copy:

# Copy data from a local directory to the pod

kubectl cp $HOME/data <pod>:/opt/app/data
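

The reverse direction works the same way, as a sketch with the same hypothetical paths:

# Copy data from the pod back to a local directory

kubectl cp <pod>:/opt/app/data $HOME/data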



Storage:


# List all PersistentVolumes

kubectl get pv


# List all PersistentVolumes sorted by capacity

kubectl get pv --sort-by=.spec.capacity.storage


# Describe specific persistent volume

kubectl describe pv <pv_name>


# List all persistent volumes claims

kubectl get pvc


# Describe specific persistent volume claim

kubectl describe pvc <pvc_name>



Create/Delete resources:



# Create (or update) resources from a manifest file

kubectl apply -f <deployment>.yaml


# Create resources from a manifest available at a URL

kubectl apply -f <manifest URL>


# Delete the resources defined in a manifest file

kubectl delete -f <deployment>.yaml
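

To preview a change before applying it, these standard options may help (a sketch, using the same placeholder manifest):

# Validate a manifest client-side without creating anything

kubectl apply -f <deployment>.yaml --dry-run=client

# Show what would change on the live cluster

kubectl diff -f <deployment>.yaml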



Formatting output:


# Append the following to a command to get output in JSON format


-o json  


# Append the following to a command to get output in YAML format


-o yaml


# Append the following to a command to get plain-text output with additional information for pods


-o wide


# Print a table using a comma-separated list of custom columns


-o=custom-columns=<spec> 
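

For example, a hypothetical custom-columns spec that prints each pod's name and phase:

kubectl get pods -o custom-columns=NAME:.metadata.name,STATUS:.status.phase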


Cluster details:


# Display the cluster details


kubectl cluster-info


The following bash aliases are quite useful:


    alias k="kubectl"

    alias allpods="kubectl get pods --all-namespaces"

    alias kc="kubectl create -f"

    alias kg="kubectl get"

    alias pods="kubectl get pods"

    alias ktop="kubectl top nodes"

    alias rcs="kubectl get rc"

    alias sv="kubectl get services"

    alias dep="kubectl get deployment"

    alias kd="kubectl describe"

    alias kdp="kubectl describe pod "

    alias kds="kubectl describe service "

    alias nodes="kubectl get nodes"

    alias klogs="kubectl logs"

    alias ns="kubectl get ns"

    alias deploys="kubectl get deployment"

    alias events="kubectl get events"

    alias kexec="kubectl exec -it "

    alias sec="kubectl get secrets"
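

To make these aliases permanent, they can be added to the shell startup file, and kubectl bash completion can be enabled alongside them (a sketch, assuming bash and that the aliases are kept in ~/.bash_aliases):

# Load the aliases and kubectl completion on every login

echo "source ~/.bash_aliases" >> ~/.bashrc

echo "source <(kubectl completion bash)" >> ~/.bashrc

# Make completion work for the k alias as well

echo "complete -o default -F __start_kubectl k" >> ~/.bashrc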

 


Friday, August 4, 2023

RCLONE - Data migration from on-premise to cloud using rclone tool



Recently we had a requirement to migrate around 70 TB of on-premise data from Swift storage to an AWS S3 bucket. We initially explored various migration options and chose rclone because it supports a wide range of cloud storage providers, including Google Drive, Dropbox, Amazon S3, and Microsoft OneDrive.


Why rclone?


Rclone is mature, open-source software originally inspired by rsync and written in Go. It is multi-threaded, transfers data very quickly, and lets you control the transfer speed.

Here is a summary of the rclone equivalents to familiar Unix commands (a few example invocations follow the list):


rsync: Rclone sync is a powerful tool for synchronizing files and directories between your local machine and the cloud. It can be used to create backups, mirror directories, or simply keep your local and cloud data in sync.

cp: Rclone copy is a simple tool for copying files and directories from your local machine to the cloud or vice versa.

mv: Rclone move is a simple tool for moving files and directories from one cloud storage provider to another.

mount: Rclone mount allows you to mount a cloud storage bucket as a directory on your local machine. This means that you can access your cloud files as if they were stored on your local hard drive.

ls: Rclone ls lists the contents of a cloud storage bucket.

ncdu: Rclone ncdu displays the disk usage of a cloud storage bucket.

tree: Rclone tree displays a tree view of the contents of a cloud storage bucket.

rm: Rclone rm removes files and directories from the cloud.

cat: Rclone cat displays the contents of a file in the cloud.
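

A few example invocations, assuming a remote named "remote" has already been set up with "rclone config" (bucket and paths are placeholders):

# List files in a bucket (like ls)

rclone ls remote:mybucket

# Copy a local directory into the bucket (like cp)

rclone copy /local/data remote:mybucket/data

# Interactive disk-usage view of the bucket (like ncdu)

rclone ncdu remote:mybucket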



How to configure and use rclone?


Here I am providing the configuration for Amazon S3. Follow the steps outlined below, and also refer to https://rclone.org/s3/


  1. Download the rclone software for your respective OS - https://rclone.org/downloads/

  2. Run "rclone config"; it will guide you through an interactive setup process.

     Source configuration: https://rclone.org/swift/

     Target configuration: https://rclone.org/s3/#configuration

  3. Validate the details in the configuration file (~/.config/rclone/rclone.conf) on your local machine and ensure you are able to connect to the source and target using rclone.



$ cat ~/.config/rclone/rclone.conf

[onprem_swift]

type = swift

env_auth = true

user = my_object_store

key =

auth = https://swift.company.com/auth/v1.0

domain =

tenant = my_object_store

tenant_domain =

region =

storage_url =

auth_version =

endpoint_type = public



[cloud-aws-s3]

type = s3

provider = AWS

access_key_id = <aws s3 access key id>

secret_access_key = < secret key>

region = us-east-2

location_constraint = us-east-2

acl = private

storage_class = STANDARD


Display the directories in swift/s3 buckets

[user@host01 ~]$ rclone lsd cloud-aws-s3:prod

           0 2023-10-28 02:33:56        -1 data

           0 2023-10-28 02:33:56        -1 images





  4. Now we are ready to transfer the data; let's do it:


[user@host01 ~]$ rclone sync --progress --transfers=8 --bwlimit=10M onprem_swift:prod cloud-aws-s3:prod


[user@host01 ~] $ ps -ef|grep rclone

user+ 4018633       1  7 00:05 pts/0    00:00:26 rclone sync --progress onprem_swift:prod cloud-aws-s3:prod

user+ 4047166 2812493  0 00:11 pts/0    00:00:00 sh rclone_prod.sh

user+ 4047172 4047166 29 00:11 pts/0    00:00:11 rclone sync --progress onprem_swift:prod cloud-aws-s3:prod

user+ 4050901 2812493  0 00:11 pts/0    00:00:00 grep --color=auto rclone

[user@host01 ~]$



The rclone tool gives us more control over transferring data in parallel, and it is very easy to use because its commands resemble familiar Unix commands.
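

Once the sync finishes, a read-only comparison of source and destination can confirm that everything was transferred, for example (using the remote names configured above):

# Verify that the destination matches the source without modifying anything

rclone check onprem_swift:prod cloud-aws-s3:prod --one-way --progress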


Best Practices:


  1. We tried various options to speed up the transfer between Swift and AWS S3, but initially did not get the throughput we wanted. After analyzing the details, we created the AWS S3 bucket in the region closest to us (us-east-2) and used around 5 machines to transfer in parallel; these changes significantly sped up the migration, with average speeds between 40 MB/s and 90 MB/s.


  2. I recommend using the latest version of rclone; it has a great support community and forum where you can post issues and get quick assistance.



Friday, June 30, 2023

Oracle Database 23C - Interesting Features



Oracle Database 23c is a much-awaited release with a lot of key features. It is a one-stop database for the modern needs of both NoSQL and relational workloads, and it provides sought-after features such as the following.


Takeaway - Oracle Database 23c supports these key capabilities in a single database, without the need for specialized databases.

JSON:

JSON-relational duality views combine the advantages of using JSON documents with the advantages of the relational model, while avoiding the limitations of each.

Simple SQL 

A set of SQL syntax simplifications and enhancements that make database access easier for applications and developers.

Graph Support

Native support for property graphs and relationship queries in the Oracle database.

Useful link - https://docs.oracle.com/en/database/oracle/oracle-database/23/nfcoa/#Oracle%C2%AE-Database

JSON - https://blogs.oracle.com/database/post/json-relational-duality-app-dev#:~:text=The%20new%20feature%20in%20Oracle,a%20JSON%20Relational%20Duality%20View.&text=Using%20Duality%20Views%2C%20data%20is,JSON%20documents%20(figure%202).