How to fix: ERROR 1045 (28000): Access denied for user 'root'@'localhost' (using password: YES)

Open your terminal and log in as the MySQL root user:

mysql -u root -p

Enter your root password when prompted. If the credentials are correct, you should now be at the mysql> prompt. Error 1045 simply means the password you supplied does not match the one stored for 'root'@'localhost', so double-check the password and user name before trying again.

Ansible: Installation Process

In this chapter, we will learn about the environment setup of Ansible.

Installation Process 

When we talk about Ansible deployment, there are mainly two types of machines:

Control machine − the machine from which we manage other machines.
Remote machine − the machines that are handled/controlled by the control machine.

One control machine can handle multiple remote machines, so to manage remote machines we only have to install Ansible on the control machine.

Control Machine Requirements

Ansible can be run from any machine with Python 2 (version 2.6 or 2.7) or Python 3 (version 3.5 or higher) installed.

By default, Ansible uses SSH to manage remote machines.

Ansible does not add any database, and it does not require any daemons to be started or kept running.
While managing remote machines, Ansible does not leave any software installed or running on them. Hence, there is no question of how to upgrade it when moving to a new version.

Ansible can be installed on a control machine that meets the above requirements in several ways.
You can install the latest release through apt, yum, pkg, pip, etc.
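For example, on common platforms the installation usually looks like the commands below (treat them as illustrative; package availability and repository setup vary by distribution):

# Debian/Ubuntu control machine
sudo apt update
sudo apt install -y ansible

# RHEL/CentOS control machine (the EPEL repository may need to be enabled first)
sudo yum install -y ansible

# Any machine with Python, via pip
pip install ansible

# Verify the installation
ansible --version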

Note − Windows is not supported as a control machine.

Ansible: Automation Tool

Introduction

Ansible is a powerful open-source automation tool for configuring, managing and deploying software applications on nodes, without any downtime, just by using SSH.
Ansible is a simple open-source IT engine that automates application deployment.
Ansible is easy to deploy because it does not use any agents or custom security infrastructure.
Ansible is designed for multi-tier deployment.
After connecting to your nodes, Ansible pushes small programs called "Ansible modules".
Ansible runs those modules on your nodes and removes them when finished. Ansible manages your inventory in simple text files (the hosts file).
Ansible uses the hosts file, where one can group hosts and control the actions on a specific group in the playbooks; a small example follows.
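For example, a minimal hosts file might look like this (the group and host names are hypothetical):

[webservers]
web1.example.com
web2.example.com

[dbservers]
db1.example.com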

How Ansible works

There are many similar automation tools available, such as Puppet, Chef and Salt, but Ansible divides machines into two types of servers: controlling machines and nodes.

Ansible is installed on the controlling machine, and the nodes are managed by this controlling machine over SSH.
The locations of the nodes are specified in the controlling machine's inventory.

Ansible is agentless, which means there is no need for any agent installation on the remote nodes.
Ansible can handle multiple nodes from a single system over an SSH connection, and an entire operation can be handled and executed by the single command ansible, as in the example below.
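For instance, using the hypothetical inventory above, one ansible command can act on a whole group at once:

# Check connectivity to every host in the webservers group
ansible webservers -i hosts -m ping

# Run an arbitrary shell command on all inventory hosts
ansible all -i hosts -m shell -a "uptime"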

In cases where you are required to execute multiple commands for a deployment, you can build playbooks.
Playbooks are collections of commands that can perform multiple tasks, and each playbook is in YAML file format.

YAML is a human-readable data serialization language, commonly used for configuration files but usable in many applications where data is stored, and it is very easy for humans to understand, read and write.
Hence the advantage is that even IT infrastructure support staff can read and understand a playbook and debug it if needed. A small sample playbook follows.
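As a sketch, a playbook that installs and starts Apache on the webservers group might look like this (the file name site.yml and the package and group names are illustrative):

---
- hosts: webservers
  become: yes
  tasks:
    - name: Install the Apache web server
      yum:
        name: httpd
        state: present
    - name: Make sure Apache is running
      service:
        name: httpd
        state: started

It would be run with: ansible-playbook -i hosts site.yml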

Use of Ansible

Ansible can be used in IT infrastructure to manage and deploy software applications to remote nodes.
For example, say you need to deploy one application, or several, to multiple nodes with a single command.
This is where Ansible comes into the picture: with its help you can deploy as many applications to as many nodes as needed with one single command, and you only need a little programming knowledge to understand Ansible scripts.

Setup NFS Server on CentOS 8 / RHEL 8

NFS stands for Network File System; it helps you share files and folders between Linux/Unix systems.
NFS enables you to mount a remote share locally.

This guide helps you set up an NFS server on CentOS 8 / RHEL 8.

Benefits of NFS

•File/folder sharing between UNIX/Linux systems
•Allows you to mount remote file systems locally
•Can act as a centralized storage system
•Can be used as a storage domain (datastore) for VMware and other virtualization platforms
•Allows applications to share configuration and data files with multiple nodes
•Allows files to stay updated across the share

Important Services

The following are important NFS services, included in the nfs-utils package.

rpcbind: The rpcbind server converts RPC program numbers into universal addresses.
nfs-server: It enables clients to access NFS shares.
nfs-lock / rpc-statd: NFS file locking; implements file lock recovery when an NFS server crashes and reboots.
nfs-idmap: It translates user and group IDs into names, and user and group names into IDs.

Important Configuration Files

You will mainly be working with the below configuration files to set up the NFS server and clients.

/etc/exports: The main configuration file; it controls which file systems are exported to remote hosts and specifies options.
/etc/fstab: This file controls which file systems, including NFS directories, are mounted when the system boots.
/etc/sysconfig/nfs: This file controls which ports the required RPC services run on.
/etc/hosts.allow and /etc/hosts.deny: These files are called TCP wrappers and control access to the NFS server.
They are used by NFS to decide whether or not to accept a connection coming in from another IP address; an example follows.
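As an illustration, classic TCP-wrapper entries restricting NFS to a trusted subnet might look like the lines below (the subnet matches the demo network used later; note that newer releases, including RHEL 8, have deprecated TCP wrappers, so verify support on your system before relying on them):

# /etc/hosts.deny − deny everyone by default
rpcbind mountd statd : ALL

# /etc/hosts.allow − then allow the trusted subnet
rpcbind mountd statd : 192.168.0.0/24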

Environment

Here, we will use CentOS 8 minimal for this demo. This guide should also work on Oracle Linux and Fedora systems.

NFS Server

Host Name: server.local.com (CentOS 8)
IP Address: 192.168.0.170/24

NFS Client

Host Name: client.local.com (CentOS 8)
IP Address: 192.168.0.171/24

Configure NFS Server

Install NFS Server

Install the below package for the NFS server using the yum command.
yum install -y nfs-utils

Once the packages are installed, enable and start NFS services.
systemctl start nfs-server rpcbind
systemctl enable nfs-server rpcbind
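You can optionally confirm that both services came up cleanly:

systemctl status nfs-server rpcbind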

Create NFS Share

Now, let’s create a directory to share with the NFS client. Here I will be creating a new directory named nfsfileshare in the / partition.

You can also share an existing directory with NFS.
mkdir /nfsfileshare

Allow the NFS client to read and write to the created directory.
chmod 777 /nfsfileshare/

We have to modify the /etc/exports file to add an entry for the directory /nfsfileshare that we want to share.
vi /etc/exports

Create an NFS share entry like the one below.
/nfsfileshare 192.168.0.171(rw,sync,no_root_squash)

/nfsfileshare: shared directory

192.168.0.171: IP address of the client machine. We can also use the hostname instead of an IP address.
It is also possible to allow a whole range of clients with a subnet such as 192.168.0.0/24; see the example after the option descriptions.

rw: Writable permission on the shared folder.
sync: All changes to the exported file system are flushed to disk before the server replies; the respective write operations are waited for.
no_root_squash: By default, any file request made by user root on the client machine is treated as if it were made by user nobody on the server.
(Exactly which UID the request is mapped to depends on the UID of user "nobody" on the server, not the client.)
If no_root_squash is selected, then root on the client machine will have the same level of access to the files on the system as root on the server.

You can see all of the available options in the man page: man exports.
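For example, to export the share to the whole demo subnet instead of a single client, the entry would look like this:

/nfsfileshare 192.168.0.0/24(rw,sync,no_root_squash)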

Export the shared directories using the following command.
exportfs -r

Extra Parameters:

exportfs -v: Displays a list of shared directories and their export options on the server.
exportfs -a: Exports all directories listed in /etc/exports.
exportfs -u: Unexports one or more directories.
exportfs -r: Re-exports all directories after /etc/exports has been modified.

After configuring the NFS server, we need to mount that shared directory on the NFS client.

Configure Firewall

We need to configure the firewall on the NFS server to allow the NFS client to access the NFS share.
To do that, run the following commands on the NFS server.
firewall-cmd --permanent --add-service mountd
firewall-cmd --permanent --add-service rpc-bind
firewall-cmd --permanent --add-service nfs
firewall-cmd --reload
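You can verify that the services were added to the active zone:

firewall-cmd --list-services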

Configure and Install NFS client

We need to install NFS packages on the NFS client to mount a remote NFS share.

Install the NFS packages using the below command.
yum install -y nfs-utils

Check NFS Share

Before mounting the NFS share, check which shares are available on the NFS server by running the following command on the NFS client.

Replace the IP address below with your NFS server's IP address or hostname.
showmount -e 192.168.0.170

Output:
Export list for 192.168.0.170:
/nfsfileshare 192.168.0.171

As per the output, /nfsfileshare is available on the NFS server (192.168.0.170) for the NFS client (192.168.0.171).

Extras:
showmount -e: Shows the available shares on your local machine (the NFS server).
showmount -e <server-ip or hostname>: Lists the available shares on the remote server.

Mount NFS Share

Now, create a directory on the NFS client on which to mount the NFS share /nfsfileshare that we created on the NFS server.
mkdir /mnt/nfsfileshare

Use the below command to mount the NFS share /nfsfileshare from NFS server 192.168.0.170 at /mnt/nfsfileshare on the NFS client.

mount 192.168.0.170:/nfsfileshare /mnt/nfsfileshare

Verify the mounted share on the NFS client using the mount command.
mount | grep nfs

Output:
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
nfsd on /proc/fs/nfsd type nfsd (rw,relatime)
192.168.0.170:/nfsfileshare on /mnt/nfsfileshare type nfs4 (rw,relatime,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.171,local_lock=none,addr=192.168.0.170)

Also, you can use the df -hT command to check the mounted NFS share.
df -hT

Output:

Filesystem                 Type      Size  Used Avail Use% Mounted on
/dev/mapper/centos-root    xfs        50G  1.2G   49G   3% /
devtmpfs                   devtmpfs  485M     0  485M   0% /dev
tmpfs                      tmpfs     496M     0  496M   0% /dev/shm
tmpfs                      tmpfs     496M  6.7M  490M   2% /run
tmpfs                      tmpfs     496M     0  496M   0% /sys/fs/cgroup
/dev/mapper/centos-home    xfs        47G   33M   47G   1% /home
/dev/sda1                  xfs      1014M  154M  861M  16% /boot
tmpfs                      tmpfs     100M     0  100M   0% /run/user/0
192.168.0.170:/nfsfileshare nfs4       50G  1.2G   49G   3% /mnt/nfsfileshare

Create a file in the mounted directory to verify read and write access on the NFS share.
touch /mnt/nfsfileshare/test

If the above command returns no error, you have a working NFS setup.
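You can also confirm from the server side that the file actually landed in the exported directory:

ls -l /nfsfileshare/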

Automount NFS Shares

To mount the shares automatically on every reboot, you would need to modify /etc/fstab file of your NFS client.

vi /etc/fstab

Add an entry like the one below.
#
# /etc/fstab
# Created by anaconda on Wed Jan 17 12:04:02 2018
#
# Accessible filesystems, by reference, are maintained under '/dev/disk'
# See man pages fstab(5), findfs(8), mount(8) and/or blkid(8) for more info
#
/dev/mapper/centos-root /                       xfs     defaults        0 0
UUID=60a496d0-69f4-4355-aef0-c31d688dda1b /boot                   xfs     defaults        0 0
/dev/mapper/centos-home /home                   xfs     defaults        0 0
/dev/mapper/centos-swap swap                    swap    defaults        0 0
192.168.0.170:/nfsfileshare /mnt/nfsfileshare    nfs     nosuid,rw,sync,hard,intr  0  0

Save and close the file.

Reboot the client machine and check whether the share is automatically mounted or not.

reboot

Verify the mounted share on the NFS client using the mount command.

mount | grep nfs

Output:
sunrpc on /var/lib/nfs/rpc_pipefs type rpc_pipefs (rw,relatime)
192.168.0.170:/nfsfileshare on /mnt/nfsfileshare type nfs4 (rw,nosuid,relatime,sync,vers=4.1,rsize=131072,wsize=131072,namlen=255,hard,proto=tcp,port=0,timeo=600,retrans=2,sec=sys,clientaddr=192.168.0.171,local_lock=none,addr=192.168.0.170)

If you want to unmount that shared directory from your NFS client after you are done with the file sharing, you can unmount that particular directory using umount command.

umount /mnt/nfsfileshare

Conclusion

You have successfully set up an NFS server and NFS client on CentOS 8 / RHEL 8.
If you prefer not to use static mounts, you can configure AutoFS to mount NFS shares only when a user accesses them.

Location of ASM tablespace files

Log in as the "grid" user and set up the environment:

export ORACLE_HOME=/u01/app/12.1.0/grid   # this is also the usual installation directory

export ORACLE_SID=+ASM

cd $ORACLE_HOME/bin

./asmcmd -p

ASMCMD> ls
+DATA01
+DATA02
ASMCMD> cd +DATA01
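From here you can drill down to the datafiles. For example (the database name ORCL and the file names below are illustrative; actual names depend on your databases):

ASMCMD> cd ORCL/DATAFILE
ASMCMD> ls
SYSTEM.256.912345678
SYSAUX.257.912345679
USERS.259.912345681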

How to Fix Yum Error: database disk image is malformed

Possible Cause For Error

Yum is a well-known tool for installing packages on RPM-based Linux distributions. It is the most widely used package manager on Red Hat Enterprise Linux, CentOS and Fedora.
Operating systems derived from these distributions also use yum to install, update and remove software packages.
If you encounter the following error message while updating or installing packages, something has gone wrong with yum's database.

Error: database disk image is malformed

It usually occurs when a "yum update" process is interrupted; quite possibly your system powered off during "yum update" and left the yum database in a corrupt state. All we need to do is clean and repair the yum database to fix the error.

Fixing this yum error

There are a few steps to resolve this error, all pretty simple.

Clear Yum Caches

Clear the yum cache by running the following command. It removes the cached entries under the /var/cache/yum/ directory.
Please note that it will not affect any of your currently installed packages; it is a pretty safe command to run.

yum clean all

Clear yum's XML metadata by running the following command:

yum clean metadata 

Run the following command to clear the cached files for the database.

yum clean dbcache

Finally, rebuild yum's metadata cache with the following command:

yum makecache

Rebuild RPM Database

As the last step, rebuild your system's RPM database. Run the following two commands to achieve this.

mv /var/lib/rpm/__db* /tmp 

rpm --rebuilddb
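Before retrying your update, you can optionally run yum's built-in consistency check; it should return without reporting problems:

yum check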

OIM/OAM 12c RCU Creation failed for Oracle Database 12c

Issue:

When we run the RCU utility to create the OIM/OAM schemas, the following error is encountered:
Mon Jul 24 13:36:02.696 UTC 2018 ERROR assistants.rcu.backend.task.PrereqTask: oracle.sysman.assistants.rcu.backend.task.PrereqTask::execute: Prereq Evaluation Failed
oracle.sysman.assistants.rcu.backend.validation.PrereqException:
ERROR – RCU-6083 Prerequisite check failed for selected component:
CAUSE – RCU-6083 Prerequisite check failed for selected component.
ACTION – RCU-6083 Refer to the RCU logs for additional details. Make sure that the prerequisite requirements are met.
Refer to RCU log at /tmp/RCU2018-07-24_13-30_1931354565/logs/rcu.log for details.
at oracle.sysman.assistants.rcu.backend.validation.PrereqEvaluator.executePrereqTask(PrereqEvaluator.java:713)
at oracle.sysman.assistants.rcu.backend.task.PrereqTask.execute(PrereqTask.java:68)
at oracle.sysman.assistants.rcu.backend.task.ActualTask.run(TaskRunner.java:346)
at java.lang.Thread.run(Thread.java:748)
Mon Jul 24 13:36:02.697 UTC 2018 ERROR assistants.rcu.backend.task.ActualTask: oracle.sysman.assistants.rcu.backend.task.ActualTask::run: RCU Operation Failed
oracle.sysman.assistants.common.task.TaskExecutionException:
ERROR – RCU-6083 Prerequisite check failed for selected component:
CAUSE – RCU-6083 Prerequisite check failed for selected component.
ACTION – RCU-6083 Refer to the RCU logs for additional details. Make sure that the prerequisite requirements are met.
Refer to RCU log at /tmp/RCU2018-09-24_13-30_1931354565/logs/rcu.log for details.
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
Error: Views/Synonyms required for XA transaction support are missing in this Database 12c.
These views/synonyms are required by the OIM Schema.
Action: Refer Oracle Database Administrator’s Guide to install XA transaction recovery views/synonyms using the script xaview.sql. Contact your DBA.
For Database12c CDB config: execute xaview.sql from PDB SYS user
For Database12c NON-CDB config: execute xaview.sql from CDB SYS user

Resolution:

Step 1
Log in to your database as SYS or any other user with SYSDBA privileges and run the below two scripts:

@$ORACLE_HOME/javavm/install/initxa.sql
@$ORACLE_HOME/rdbms/admin/xaview.sql

Step 2
Along with the above views, make sure the following parameters are tuned as per Oracle's recommendations (a quick example of checking and adjusting them follows the list):
PROCESSES (min > 300)
NLS_LENGTH_SEMANTICS (BYTE)
SHARED_POOL_SIZE ( min > 200MB)
SGA_MAX_SIZE (min > 200MB)
DB_BLOCK_SIZE (min > 8k)
OPEN_CURSORS (min > 1000)
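A quick way to check and adjust these from SQL*Plus might look like the following (the values are illustrative; size them for your environment, and note that changes made with scope=spfile need a restart to take effect):

show parameter processes
show parameter open_cursors
alter system set processes=500 scope=spfile;
alter system set open_cursors=1500 scope=both;
shutdown immediate
startup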

ORA-01078: failure in processing system parameters LRM-00109: could not open parameter file - Fixed

Problem − When starting up the database, the following error occurs:

SQL> startup
ORA-01078: failure in processing system parameters
LRM-00109: could not open parameter file '/opt/oracle/product/11.2.0/dbhome_1/dbs/inittest01.ora'

I faced this problem while setting up a new VMware image: when I started the server, I got this error.
What does it mean?

Reason −
The database starts using an spfile by default.
On Unix, the default path is $ORACLE_HOME/dbs.
If the spfile is not present, Oracle looks for a pfile at the same path. If the pfile is also not present, it raises the above error.

Implementation −
If you have an spfile, you can create a pfile from it, as shown below. But what if you don't have an spfile?
Then you have to create a pfile by hand.
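If the spfile does exist, creating the pfile is a single SQL*Plus command run as SYSDBA (by default it writes init<SID>.ora to $ORACLE_HOME/dbs):

SQL> create pfile from spfile;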

How to create a pfile −

When the database starts, it writes the list of non-default parameters to the alert log. We can use these values to create a pfile and start the database.

Find your alert log file and open it (this example is from an Oracle Database 11g system).
Here you will see entries like this:

processes = 150
sga_target = 512M
control_files = "/opt/oracle/test01/dbs/control01.ctl"
control_files = "/opt/oracle/test01/dbs/control02.ctl"
control_files = "/opt/oracle/test01/dbs/control03.ctl"
db_block_size = 8192
compatible = "10.2.0.1.0"
log_archive_dest_1 = "LOCATION=/opt/oracle/test01/archive"
log_archive_dest_state_1 = "ENABLE"
log_archive_format = "%t_%s_%r.dbf"
log_archive_max_processes= 10
log_checkpoint_interval = 9999
log_checkpoint_timeout = 0
db_file_multiblock_read_count= 16
db_recovery_file_dest = "/opt/oracle/test01/flash_recovery_area"
db_recovery_file_dest_size= 2G
undo_management = "AUTO"
undo_tablespace = "UNDOTBS1"
remote_login_passwordfile= "EXCLUSIVE"
db_domain = "agilis.com"
job_queue_processes = 32
core_dump_dest = "/opt/oracle/test01/diag/cdump"
audit_file_dest = "/opt/oracle/test01/adump"
open_links = 10
db_name = "test01"
open_cursors = 500
optimizer_index_cost_adj = 20
optimizer_index_caching = 90
pga_aggregate_target = 128M
diagnostic_dest = "/opt/oracle/test01/diag"

Create the pfile using these values:
$ cd /opt/oracle/product/11.2.0/dbhome_1/dbs/
$ vi inittest01.ora

Copy the non-default parameter values from the alert log into this file and save it. This is your pfile.
Now start the database using this pfile.

Start the Database using Pfile:

[oracle@ dbs]$ export ORACLE_SID=test01
[oracle@ dbs]$ sqlplus sys as sysdba
SQL*Plus: Release 11.2.0.2.0 Production on Fri Jun 24 15:53:16 2011
Copyright (c) 1982, 2010, Oracle. All rights reserved.
Enter password:
Connected to an idle instance.
SQL> startup pfile='$ORACLE_HOME/dbs/inittest01.ora'
ORACLE instance started.
Total System Global Area 534462464 bytes
Fixed Size 2228200 bytes
Variable Size 163577880 bytes
Database Buffers 360710144 bytes
Redo Buffers 7946240 bytes
Database mounted.
Database opened.
SQL>

Create spfile from pfile:

SQL> create spfile from pfile='$ORACLE_HOME/dbs/inittest01.ora';
File created.

Shut down the database and restart it; it will now start with the spfile (the default) and the problem is solved.

Open your terminal and type mysql -u root -p Enter your password. Hopefully your MySQL is logged in now.