Monday, March 25, 2024

Oracle 23c New Features and Release Dates



Oracle Database 23c introduces several new features and enhancements across various areas. Here are some of the key new features in Oracle Database 23c:

Oracle Database 23c Release Dates:

Oracle typically releases new versions of its database software in a staggered manner, with the cloud (Oracle Cloud Infrastructure) releases preceding the on-premises releases. Here are the expected release dates for Oracle Database 23c:

Cloud Release:

Oracle Autonomous Database 23c: This version was released and made generally available on the Oracle Cloud Infrastructure (OCI) on May 16, 2023.

On-Premises Release:

Oracle Database 23c: The on-premises version of Oracle Database 23c is expected to be released in the first half of 2024; Oracle has not yet announced a specific date.

It is initially available on the OCI Base Database Service and is expected to become available on other platforms (cloud and on-premises) starting in 1H CY2024.

It's important to note that these dates are subject to change, and Oracle may adjust the release timelines based on various factors, such as development progress, testing, and customer feedback.

Once the on-premises version of Oracle Database 23c is released, customers with active support subscriptions will be able to download and install it on their on-premises infrastructure or private clouds. Oracle typically provides support for new database releases for several years, including critical patch updates, security fixes, and performance enhancements.

It is advised to consult Oracle's official documentation, release notes, and support channels for the most up-to-date information on the Oracle Database 23c release dates, system requirements, and any specific considerations for your environment.

Oracle 23c New Features:

Autonomous JSON:

Native storage and processing of JSON data within the database.

Faster ingestion, querying, and analysis of semi-structured data.

Support for JSON data types, functions, and SQL/JSON path expressions.
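As a quick illustration, the JSON support can be exercised with standard SQL; a minimal sketch (the table and column names below are hypothetical):

```sql
-- Native JSON column with a JSON constructor and a SQL/JSON path query
CREATE TABLE orders (id NUMBER PRIMARY KEY, details JSON);

INSERT INTO orders VALUES (1, JSON('{"customer":"Acme","total":250}'));

-- Extract a scalar value with a SQL/JSON path expression
SELECT JSON_VALUE(details, '$.customer') AS customer FROM orders;
```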

Blockchain Tables:

Tamper-resistant and immutable storage of data using blockchain technology.

Ensures data integrity, transparency, and non-repudiation.

Suitable for applications requiring auditing, regulatory compliance, or secure transaction records.
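A blockchain table is created with ordinary DDL plus mandatory retention clauses; a minimal sketch (the table name and retention periods are examples):

```sql
-- Rows become tamper-resistant; the table cannot be dropped while recently used
CREATE BLOCKCHAIN TABLE bank_ledger (
  txn_id   NUMBER,
  amount   NUMBER,
  txn_date DATE
)
NO DROP UNTIL 31 DAYS IDLE
NO DELETE LOCKED
HASHING USING "SHA2_512" VERSION "v1";
```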

Autoindex (Automatic Indexing):

Automatic creation and management of indexes based on workload patterns and queries.

Leverages machine learning algorithms to identify optimal indexes.

Improves query performance without manual index management.
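Enabling the feature is a one-liner through the DBMS_AUTO_INDEX package; a minimal sketch:

```sql
-- Let the database create and use candidate indexes automatically
EXEC DBMS_AUTO_INDEX.CONFIGURE('AUTO_INDEX_MODE', 'IMPLEMENT');

-- Review what automatic indexing has done recently
SELECT DBMS_AUTO_INDEX.REPORT_ACTIVITY() FROM dual;
```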

Converged Database:

Combines the capabilities of Autonomous Data Warehouse (ADW) and Autonomous Transaction Processing (ATP).

Allows running mixed workloads (analytical and transactional) within a single database instance.

Simplifies data management and reduces operational overhead.

Enhanced Machine Learning:

Improved machine learning capabilities with support for more algorithms.

Better model explanations and integration with Oracle Analytics Cloud.

Enables advanced data analysis and predictive modeling within the database.

Query Performance Optimizations:

Enhanced parallelism and in-memory processing.

Optimizations for specific workloads like Star Queries and Hash Joins.

Improved query execution times for analytical and reporting workloads.

Increased Storage Capacity:

Support for larger database instances to handle growing data volumes.

Ability to provision more storage capacity as needed.

Security and Compliance Enhancements:

Support for Oracle Database Vault for improved data security and access control.

Enhanced auditing capabilities and data masking options.

Compliance with industry standards and regulations.

Database Memory Management Improvements:

Automatic Memory Management enhancements for better memory utilization.

Improved performance for memory-intensive workloads.

SQL Enhancements:

New SQL functions and syntax improvements.

Support for additional data types and improved compatibility with other database systems.
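A few of the widely cited 23c SQL additions, sketched below (the employees table is hypothetical): SELECT without a FROM clause, the native BOOLEAN type, IF [NOT] EXISTS DDL, and GROUP BY on a column alias.

```sql
-- SELECT without a FROM clause
SELECT SYSDATE;

-- Native BOOLEAN data type
CREATE TABLE feature_flags (name VARCHAR2(30), enabled BOOLEAN);

-- IF [NOT] EXISTS clauses for DDL
DROP TABLE IF EXISTS feature_flags;

-- GROUP BY on a column alias
SELECT EXTRACT(YEAR FROM hire_date) AS hire_year, COUNT(*)
FROM employees
GROUP BY hire_year;
```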

These new features in Oracle Database 23c aim to enhance performance, security, data processing capabilities, and workload consolidation. Additionally, Oracle continues to optimize the database for cloud deployments, with a focus on automation, scalability, and self-management capabilities. 



Monday, March 11, 2024

MySQL Router with NDB Cluster



A common question is: can MySQL Router be used with MySQL NDB Cluster?

MySQL Router can be used with MySQL NDB Cluster to provide intelligent routing capabilities and improve the overall availability and scalability of the cluster deployment.

In an NDB Cluster setup, MySQL Router can act as a middleware component between the application clients and the cluster's SQL nodes (the mysqld servers, which in turn access the data nodes). It routes client requests to the appropriate SQL nodes based on the operation type (read or write) and the current state of the cluster.


Here are some key benefits of using MySQL Router with MySQL NDB Cluster:

1. Read/Write Splitting: MySQL Router can automatically identify read and write queries from the application clients. It routes write queries to the primary SQL node (or group of SQL nodes in a multi-primary setup), while read queries are distributed across the remaining SQL nodes. This read/write splitting improves overall throughput and scalability by utilizing the resources of multiple SQL nodes for read operations.

2. Load Balancing: MySQL Router can perform load balancing for read queries by distributing them across the available SQL nodes. This helps distribute the read load evenly and prevents any single SQL node from becoming a bottleneck.

3. Failover and High Availability: In case of an SQL node failure or maintenance operation, MySQL Router can automatically detect the change in topology and redirect client traffic to the remaining available SQL nodes. This ensures high availability and minimizes disruption to client applications.

4. Connection Pooling: MySQL Router can maintain a pool of persistent connections to the SQL nodes, reducing the overhead of establishing new connections for each client request. This can improve overall performance, especially in scenarios with a high number of short-lived client connections.

5. Simplified Client Configuration: By using MySQL Router, application clients no longer need to handle the complexities of connecting to multiple SQL nodes or managing failover scenarios. They can connect to a single entry point (MySQL Router), which transparently routes their requests to the appropriate SQL nodes.

To use MySQL Router with MySQL NDB Cluster, you configure Router with the connection details of the cluster's SQL nodes (NDB Cluster does not use the InnoDB Cluster metadata, so static routing sections are typically used). MySQL Router then starts routing client traffic accordingly.
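A minimal static-routing sketch of a mysqlrouter.conf, assuming two SQL nodes named sqlnode1 and sqlnode2 (hostnames and ports are examples):

```ini
# Writes: first-available picks the first reachable SQL node
[routing:ndb_rw]
bind_address = 0.0.0.0
bind_port = 6446
destinations = sqlnode1:3306,sqlnode2:3306
routing_strategy = first-available

# Reads: round-robin spreads connections across SQL nodes
[routing:ndb_ro]
bind_address = 0.0.0.0
bind_port = 6447
destinations = sqlnode1:3306,sqlnode2:3306
routing_strategy = round-robin
```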

It's important to note that while MySQL Router can provide intelligent routing and load balancing capabilities, it does not replace the need for proper cluster configuration and management. MySQL NDB Cluster still requires careful planning and configuration to ensure data consistency, high availability, and optimal performance.

Monday, March 4, 2024

MySQL Router 8.3

 


MySQL Router 8.3 is the latest version of the middleware component that provides transparent routing capabilities between MySQL clients and MySQL servers. It is designed to simplify the deployment and management of highly available and scalable MySQL deployments, such as InnoDB Clusters and replication setups.

One of the key features of MySQL Router 8.3 is its enhanced routing capabilities for InnoDB Clusters. It can automatically detect the primary and secondary instances within a cluster and route read and write operations accordingly. Write operations are directed to the primary instance, while read operations are distributed across the secondary instances, providing improved read scalability and high availability.

MySQL Router 8.3 also offers improved support for asynchronous replication setups. It can route write operations to the replication source and read operations to the replicas, ensuring data consistency and efficient load distribution. Additionally, it supports automatic failover and switchover scenarios, seamlessly redirecting traffic to the new primary instance or replication source in case of failures or maintenance operations.

Another notable enhancement in MySQL Router 8.3 is its support for MySQL Group Replication. It can automatically detect the members of a Group Replication setup and route traffic to the appropriate nodes based on their roles (primary or secondary). This feature simplifies the deployment and management of highly available MySQL setups using Group Replication.

MySQL Router 8.3 also includes improvements in performance, security, and configuration options. It supports TLS encryption for secure communication between clients and servers, as well as improved connection pooling and caching mechanisms for better performance.

Overall, MySQL Router 8.3 aims to simplify the deployment and management of highly available and scalable MySQL deployments, providing transparent routing capabilities, automatic failover handling, and improved performance and security features.

Performance Recommendations

For best performance, MySQL Router is typically installed on the same host as the application that uses it. 

This helps reduce latency and improve performance by avoiding an extra network hop between the application and the Router.





Read/Write Splitting


At a large scale, read operations are distributed across multiple replicas to enhance performance and availability. However, this necessitates a mechanism within the application to direct write operations to a specific location while routing read operations to other destinations. With the introduction of MySQL 8.2, MySQL Router offers a solution by automatically identifying read and write queries. For InnoDB Clusters, it routes write operations to the Primary Instances and read operations to secondary instances or replicas. In the case of asynchronous replication setups, write queries are directed to the replication source, while read queries are forwarded to the replicas or secondary instances.
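A configuration sketch of such a read/write-splitting endpoint, assuming an InnoDB Cluster named myCluster (the section name and port are examples; requires Router 8.2 or later):

```ini
# One port for both reads and writes; Router classifies each statement
[routing:rw_split]
bind_address = 0.0.0.0
bind_port = 6450
destinations = metadata-cache://myCluster/?role=PRIMARY_AND_SECONDARY
routing_strategy = round-robin
access_mode = auto
connection_sharing = 1
protocol = classic
```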

Monday, February 12, 2024

Oracle Database TDE in OCI Cloud

Oracle Database TDE in OCI Cloud vs On-Premises


Oracle's Transparent Data Encryption (TDE) is a powerful data-at-rest encryption feature that can be used to secure Oracle databases in the Oracle Cloud Infrastructure (OCI) environment. TDE provides a robust layer of protection for sensitive data stored in database files, backups, and other storage media.

On-premises, TDE is part of the Advanced Security Option, which customers must license separately. In the cloud, TDE is included with the database license: even when using BYOL for the database, TDE is offered as part of the native database license in OCI.

This is a change in the licensing model: in the OCI Cloud, all databases are TDE-enabled by default, and there is no option to skip TDE.

You can leverage the benefits of TDE in the OCI cloud without paying for extra licensing. :)

To recap, the key TDE benefits are:

  1. TDE provides a strong layer of data protection by encrypting sensitive data at rest, including database files, backups, and exports.
  2. This helps organizations meet various compliance requirements and industry regulations related to data security, such as PCI-DSS, HIPAA, and GDPR.
  3. In a cloud environment, data is stored on shared infrastructure, and TDE ensures that even if an unauthorized party gains access to the storage media, the data remains encrypted and unreadable.
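To confirm TDE is active on an OCI database, the standard dictionary views can be queried; a minimal sketch:

```sql
-- Check the keystore (wallet) state
SELECT wrl_type, status, wallet_type FROM v$encryption_wallet;

-- See which tablespaces are encrypted
SELECT tablespace_name, encrypted FROM dba_tablespaces;
```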



Monday, January 22, 2024

Oracle Zero Downtime Migration (ZDM) - Installation



ZDM - Install ZDM 21.4

Requirements:

- Minimum OS version Oracle Linux 7

Install Pre-Req:

[root@zdmhost01]# yum install glibc-devel expect libnsl ncurses-compat-libs

Install ZDM:

[oracle@zdmhost01]$ cd /opt/zdmsoftware/zdm21.4.1/

[oracle@zdmhost01]$ ./zdminstall.sh setup oraclehome=/opt/app/zdm oraclebase=/opt/app/base \
ziploc=/opt/zdmsoftware/zdm21.4.1/zdm_home.zip

[oracle@zdmhost01]$ cd /opt/app/zdm/bin

[oracle@zdmhost01]$ ./zdmservice start

No instance detected, starting zdmservice

spawn /opt/app/zdm/mysql/server/bin/mysqladmin --defaults-file=/opt/app/base/crsdata/instance-20231111-1137/rhp/conf/my.cnf -u root -p ping


ZDM - Setting up ZDM Machine

Setup Passwordless SSH.
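If the ZDM host does not already have a key pair, generate one first (a minimal sketch; adjust the key type and path to your standards):

```shell
# Generate an RSA key pair for the ZDM software owner (skip if one already exists)
ssh-keygen -t rsa -b 4096 -N "" -f "$HOME/.ssh/id_rsa"
```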

[oracle@zdmhost01 ~]$ cat .ssh/id_rsa.pub

#on the source database host as user oracle

#Add ZDM public keys in source and target host.

[oracle@sourcedb01 ~]$ vi .ssh/authorized_keys

#insert the public key and save the changes

[opc@targetdb01 ~]$ vi .ssh/authorized_keys

#insert the public key and save the changes

#add the following in source, target and ZDM machines.

[root@zdmhost01 ~]# vi /etc/hosts

#add the following entries

10.10.10.01 sourcedb

10.10.10.02 targetdb

ZDM - Prepare Source Database

Turn on FORCE LOGGING at the primary database.

SQL> alter database force logging;

SQL> select force_logging from v$database;

FORCE_LOGGING

---------------------------------------

YES

-- Enable ARCHIVELOG mode for the database.

SQL> select log_mode from v$database;

LOG_MODE

------------

ARCHIVELOG
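If the query instead returns NOARCHIVELOG, archiving can be enabled with a short outage (standard commands, run as SYSDBA):

```sql
SHUTDOWN IMMEDIATE;
STARTUP MOUNT;
ALTER DATABASE ARCHIVELOG;
ALTER DATABASE OPEN;
```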

-- For Oracle Database 12c Release 2 and later, it is mandatory to configure TDE.

SQL> select wrl_type, status from v$encryption_wallet;

WRL_TYPE STATUS

-------------------- ------------------------------

FILE OPEN

-- Set RMAN CONFIGURE CONTROLFILE AUTOBACKUP to ON.

[oracle@sourcedb01 ~]$ rman target /

RMAN> CONFIGURE CONTROLFILE AUTOBACKUP ON;

ZDM - Prepare Target Database

01 Ensure the source and target databases have different DB_UNIQUE_NAME values.

02 Configure the listener on the target node.

03 Make sure a password file exists and the SYS password is the same on the source and target.

04 Set up SQL*Net connectivity between the source and target database servers.
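Step 04 can be satisfied with a tnsnames.ora entry on each side; a sketch with hypothetical host and service names:

```
# tnsnames.ora on the source host (host and service names are examples)
TARGETDB =
  (DESCRIPTION =
    (ADDRESS = (PROTOCOL = TCP)(HOST = targetdb)(PORT = 1521))
    (CONNECT_DATA = (SERVICE_NAME = targetdb_svc))
  )
```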


ZDM - Migration Response File

[oracle@zdmhost01 ~]$ vi /opt/zdmsoftware/physical_online.rsp

MIGRATION_METHOD=ONLINE_PHYSICAL

DATA_TRANSFER_MEDIUM=OSS

HOST=https://cloud.oracle.com/object-storage/buckets/ax6yc1uoj1112/bucket-12234/objects?region=us-phoenix-1

OPC_CONTAINER=bucket-12234

TGT_DB_UNIQUE_NAME=TARGETDB

PLATFORM_TYPE=VMDB


ZDM - Initiate Migration

[oracle@zdmhost01 ~]$ $ZDMHOME/bin/zdmcli migrate database -rsp /opt/zdmsoftware/physical_online.rsp \

-sourcesid srcdb \

-sourcenode source \

-srcauth zdmauth \

-srcarg1 user:oracle \

-srcarg2 identity_file:/home/oracle/.ssh/id_rsa \

-srcarg3 sudo_location:/usr/bin/sudo \

-targetnode target \

-tgtauth zdmauth \

-tgtarg1 user:opc \

-tgtarg2 identity_file:/home/opc/.ssh/id_rsa \

-tgtarg3 sudo_location:/usr/bin/sudo \

-targethome /u01/app/dbhome \

-backupuser "oracleidentitycloudservice/naresh@xyz.com" \

-eval

Enter source database srcdb SYS password:

Enter user "oracleidentitycloudservice/naresh@xyz.com" password:

zdmhost01: 2023-10-14T07:59:53.865Z : Processing response file ...

Operation "zdmcli migrate database" scheduled with the job ID "1".


ZDM - Check the Status

$ZDMHOME/bin/zdmcli query job -jobid 1

Job ID: 1

User: oracle

Client: zdmhost01

Job Type: "EVAL"

...

Current status: SUCCEEDED

Current Phase: "ZDM_GET_SRC_INFO"

...

# For checking the eval job IDs

$ZDMHOME/bin/zdmcli query job -eval

# For checking the migrate job IDs

$ZDMHOME/bin/zdmcli query job -migrate

ZDM - Run Actual Migration

[oracle@zdmhost01 ~]$ $ZDMHOME/bin/zdmcli migrate database -rsp /opt/zdmsoftware/physical_online.rsp \

-sourcesid srcdb \

-sourcenode source \

-srcauth zdmauth \

-srcarg1 user:oracle \

-srcarg2 identity_file:/home/oracle/.ssh/id_rsa \

-srcarg3 sudo_location:/usr/bin/sudo \

-targetnode target \

-tgtauth zdmauth \

-tgtarg1 user:opc \

-tgtarg2 identity_file:/home/opc/.ssh/id_rsa \

-tgtarg3 sudo_location:/usr/bin/sudo \

-targethome /u01/app/dbhome \

-backupuser "oracleidentitycloudservice/naresh@xyz.com" \

-pauseafter ZDM_CONFIGURE_DG_SRC

Enter source database srcdb SYS password:

Enter user "oracleidentitycloudservice/naresh@xyz.com" password:

zdmhost01: 2023-10-14T13:59:53.865Z : Processing response file ...

Operation "zdmcli migrate database" scheduled with the job ID "2".

ZDM - Pause and Complete the Migration

[oracle@zdmhost01 ~]$ $ZDMHOME/bin/zdmcli query job -jobid 2

...

Job Type: "MIGRATE"

...

Current status: PAUSED

SQL> select database_role from v$database;

DATABASE_ROLE

----------------

PHYSICAL STANDBY

# Complete the migration.

[oracle@zdmhost01 ~]$ $ZDMHOME/bin/zdmcli resume job -jobid 2

[opc@target ~]$ sqlplus / as sysdba

SQL> select database_role from v$database;

DATABASE_ROLE

----------------

PRIMARY