Slide 1

Slide 1 text

Autonomous Health Framework: How to Use Your Database "Swiss Army Knife" (Without Poking an Eye Out)

Slide 2

Slide 2 text

Sean Scott 25+ years working with Oracle technology
 UTOUG Board ⁘ RAC SIG Board
 Oracle OpenWorld ⁘ Collaborate/IOUG ⁘ Regional UG RAC/MAA ⁘ DR/HA ⁘ TFA/AHF ⁘ Exadata/ODA
 Upgrades ⁘ Migration ⁘ Cloud ⁘ Automation DevOps ⁘ Infrastructure as Code
 Containers ⁘ Virtualization

Slide 3

Slide 3 text

Why you need AHF

Slide 4

Slide 4 text

Why you need AHF • AHF diagnostic collections are required by MOS for some SRs • Diagnostic collections accelerate SR resolution • Cluster-aware ADR log inspection and management • Advanced system and log monitoring • Incident control and notification • Connects to MOS • SMTP, REST APIs
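
For example, when MOS asks for a diagnostic collection on an SR, one command gathers and packages it. A minimal sketch; the SRDC type name and SR number below are placeholders, and the full syntax appears later under tfactl diagcollect -h:

# Hypothetical: collect SRDC-driven diagnostics for an ORA-600 and tag the SR number
tfactl diagcollect -srdc ORA600 -sr 3-00000000000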

Slide 5

Slide 5 text

Why you need AHF • Built-in Oracle tools: • ORAchk/EXAchk • OS Watcher • Cluster Verification Utility (CVU) • Hang Manager • Diagnostic Assistant

Slide 6

Slide 6 text

Why you need AHF • Integrated with: • Database • ASM and Clusterware • Automatic Diagnostic Repository (ADR) • Grid Infrastructure Management Repository (GIMR) • Cluster Health Advisor (CHA) & Cluster Health Monitor (CHM) • Enterprise Manager

Slide 7

Slide 7 text

Why you need AHF • Cluster aware: • Run commands on all or selected nodes • Cross-node configuration and file inspection • Central management for ADR • Consolidated diagnostic collection

Slide 8

Slide 8 text

Why you need AHF • Over 800 health checks • 400 identified as critical/failures • Severe-problem checks run daily at 2 AM • All-known-problem checks run weekly at 3 AM Sunday • Auto-generates a collection when problems are detected • Everything required to diagnose & resolve • Results delivered to the notification email
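
A quick way to confirm the scheduled compliance runs from the command line (the detailed output of this command appears later in this deck):

# Show the orachk/compliance scheduler status and autorun schedules
ahfctl statusahf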

Slide 9

Slide 9 text

AHF is FREE!

Slide 10

Slide 10 text

Download AHF

Slide 11

Slide 11 text

Download AHF • AHF Parent Page: Doc ID 2550798.1 • AHF On-Premises: Doc ID 2832630.1 (New) • Linux, ZLinux • Solaris x86/SPARC64 • HPUX • AIX 6/7 • Win 64-bit • AHF Gen-2 Cloud: Doc ID 2832594.1 (New)

Slide 12

Slide 12 text

Download AHF • Major release each quarter • Typically follows DBRU schedule • Naming convention is year, quarter, release: YY.Q.R • 21.4.0, 21.4.1 • Intermediate releases are common!
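
To check which release is currently installed before downloading a newer one (the same command appears again in the post-install checks):

# Report the installed AHF version
ahfctl version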

Slide 13

Slide 13 text

Install AHF

Slide 14

Slide 14 text

Types of installs: Daemon or root • Recommended method • Cluster awareness • Full AHF capabilities • Includes compliance checks • Enables notifications • Automatic diagnostic collection when issues are detected • May conflict with existing AHF/TFA installations

Slide 15

Slide 15 text

Types of installs: Local or non-root • Reduced feature set • No automatic or remote diagnostics, collections • Limited file visibility (must be readable by Oracle home owner) • /var/log/messages • Some Grid Infrastructure logs • May co-exist with Daemon installations • No special pre-install considerations

Slide 16

Slide 16 text

Don’t take shortcuts
Don’t follow Oracle’s installation instructions

Slide 17

Slide 17 text

Install AHF • Oracle’s instructions work when things are perfect • Systems are rarely perfect! • AHF and TFA are known for certain… ahem, peculiarities

Slide 18

Slide 18 text

A brief history lesson… • There are two flavors of TFA • A version downloaded from MOS • A version included in Grid Infrastructure install & patches • GI version is not fully featured • GI and MOS versions can interfere, conflict

Slide 19

Slide 19 text

Recommendation: Remove existing AHF/TFA before install

Slide 20

Slide 20 text

TFA pre-installation checks

# Check for existing AHF/TFA installs
which tfactl
which ahfctl

# Uninstall TFA (as root)
tfactl uninstall

Slide 21

Slide 21 text

TFA pre-installation checks

# Check for leftover, conflicting processes
ps -ef | egrep -i "oswbb|OSWatcher|ahf|tfa|prw|toolstatus"

# Kill any leftover processes (and make sure they stay that way!)
pkill "oswbb|OSWatcher*|toolstatus|tfactl"
sleep 300
ps -ef | egrep -i "oswbb|OSWatcher|toolstatus|tfactl"

Slide 22

Slide 22 text

TFA pre-installation checks

# Locate leftover setup configuration files
find / -name tfa_setup.txt

# Verify files are removed
find / -name tfactl
find / -name startOSWbb.sh

Slide 23

Slide 23 text

TFA pre-installation checks

# Ensure ALL AHF/TFA processes are stopped/inactive prior to uninstall
# PERFORM THIS STEP ON ALL NODES

# Remove legacy/existing AHF/TFA installations
for d in $(find / -name uninstalltfa)
do
  cd $(dirname $d)
  ./tfactl uninstall
  # cd .. && rm -fr .
done

Slide 24

Slide 24 text

No content

Slide 25

Slide 25 text

Installation—unzip

[root@vna1 ahf]# ls -l
total 407412
-rw-r--r--. 1 oracle dba 417185977 Feb 1 23:17 AHF-LINUX_v21.4.1.zip
[root@vna1 ahf]# unzip AHF-LINUX_v21.4.1.zip
Archive:  AHF-LINUX_v21.4.1.zip
  inflating: README.txt
  inflating: ahf_setup
 extracting: ahf_setup.dat
  inflating: oracle-tfa.pub

Slide 26

Slide 26 text

Installation
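
As a rough sketch of what the install invocation can look like after the unzip: the extracted ahf_setup script is run as root for a daemon install. The locations below are examples only, and the flags should be confirmed with ./ahf_setup -h for your release.

# Root/daemon (cluster-aware) install; install and data locations are illustrative
./ahf_setup -ahf_loc /opt/oracle.ahf -data_dir /u01/app/oracle.ahf/data

# A local/non-root install runs the same installer as the Oracle home owner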

Slide 27

Slide 27 text

Post-installation sanity checks

Slide 28

Slide 28 text

Command line tools: ahfctl and tfactl

$ tfactl          # launches the interactive shell
tfactl>
  - or -
$ tfactl help

$ ahfctl          # launches the interactive shell
ahfctl>
  - or -
$ ahfctl help

Slide 29

Slide 29 text

Post-install checks

ahfctl version
tfactl status
ahfctl statusahf
tfactl toolstatus
tfactl print hosts
tfactl print components
tfactl print protocols
tfactl print config -node all

Slide 30

Slide 30 text

No content

Slide 31

Slide 31 text

status vs statusahf

[root@node1 ~]# tfactl status
.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| node1 | RUNNING       | 28883 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
| node2 | RUNNING       | 30339 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'
[root@node1 ~]#

Slide 32

Slide 32 text

status vs statusahf

[root@node1 ~]# tfactl statusahf
.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| node1 | RUNNING       | 28883 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
| node2 | RUNNING       | 30339 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'

------------------------------------------------------------
Master node = node1
orachk daemon version = 214100
Install location = /opt/oracle.ahf/orachk
Started at = Wed Feb 02 20:50:12 GMT 2022
Scheduler type = TFA Scheduler
Scheduler PID: 28883
...

Slide 33

Slide 33 text

status vs statusahf

------------------------------------------------------------
ID: orachk.autostart_client_oratier1
------------------------------------------------------------
AUTORUN_FLAGS = -usediscovery -profile oratier1 -dball -showpass -tag autostart_client_oratier1 -readenvconfig
COLLECTION_RETENTION = 7
AUTORUN_SCHEDULE = 3 2 * * 1,2,3,4,5,6
------------------------------------------------------------
------------------------------------------------------------
ID: orachk.autostart_client
------------------------------------------------------------
AUTORUN_FLAGS = -usediscovery -tag autostart_client -readenvconfig
COLLECTION_RETENTION = 14
AUTORUN_SCHEDULE = 3 3 * * 0
------------------------------------------------------------

Next auto run starts on Feb 03, 2022 02:03:00
ID:orachk.AUTOSTART_CLIENT_ORATIER1

statusahf option in tfactl is deprecated and will be removed in AHF 22.1.0.
Please start using ahfctl for statusahf, Example: ahfctl statusahf

Slide 34

Slide 34 text

Failed installs and upgrades
Common issues

Slide 35

Slide 35 text

Warning remains after a successful upgrade

[root@node1 ahf]# ahfctl statusahf

WARNING - AHF Software is older than 180 days. Please consider upgrading AHF to the latest version using ahfctl upgrade.

.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| node1 | RUNNING       | 28883 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
| node2 | RUNNING       | 24554 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'

• Run ahfctl syncpatch

Slide 36

Slide 36 text

Not all nodes appear after upgrade

[root@node1 ahf]# tfactl status
.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| node1 | RUNNING       | 28883 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'
[root@node1 ahf]#

• Run ahfctl syncnodes

Slide 37

Slide 37 text

Not all nodes appear after upgrade

[root@node1 ahf]# tfactl syncnodes

Current Node List in TFA :
1. node1
2. node2

Node List in Cluster :
1. node1
2. node2

Node List to sync TFA Certificates :
 1 node2

Do you want to update this node list? Y|[N]:

Syncing TFA Certificates on node2 :

TFA_HOME on node2 : /opt/oracle.ahf/tfa
...

Slide 38

Slide 38 text

Not all nodes appear after upgrade (cont)

...
TFA_HOME on node2 : /opt/oracle.ahf/tfa
DATA_DIR on node2 : /opt/oracle.ahf/data/node2/tfa

Shutting down TFA on node2...
Copying TFA Certificates to node2...
Copying SSL Properties to node2...
Sleeping for 5 seconds...
Starting TFA on node2...

.---------------------------------------------------------------------------------------------.
| Host  | Status of TFA | PID   | Port | Version    | Build ID             | Inventory Status |
+-------+---------------+-------+------+------------+----------------------+------------------+
| node1 | RUNNING       | 28883 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
| node2 | RUNNING       | 30339 | 5000 | 21.4.1.0.0 | 21410020220111213353 | COMPLETE         |
'-------+---------------+-------+------+------------+----------------------+------------------'
[root@node1 ahf]#

Slide 39

Slide 39 text

OS Watcher not managed by AHF/TFA message

• Legacy TFA install present • OS Watcher process running during install/upgrade • Multiple install.properties or tfa_setup.txt files • Check logs & permissions • Reinstall

[root@node1 ahf]# tfactl toolstatus | grep oswbb
|  | oswbb | 8.3.2 | NOT MANAGED BY TFA |
|  | oswbb | 8.3.2 | RUNNING            |

Slide 40

Slide 40 text

Installation and upgrade issues • Post-installation troubleshooting: • ahfctl stopahf; ahfctl startahf • tfactl stop; tfactl start • tfactl status • ahfctl statusahf • tfactl toolstatus • tfactl syncnodes • ahfctl syncpatch

Slide 41

Slide 41 text

Installation and upgrade issues • Post-installation troubleshooting: • tfactl diagnosetfa • Create an SR and upload result to MOS

Slide 42

Slide 42 text

Installation and upgrade issues

Slide 43

Slide 43 text

Installation and upgrade issues

Slide 44

Slide 44 text

Recommendation: Post-installation configurations

Slide 45

Slide 45 text

Move the repository to shared storage!

Slide 46

Slide 46 text

RAC: Move the repository to shared storage
[diagram: each node keeps its own local repository; MOS needs files from all of them]

Slide 47

Slide 47 text

RAC: Move the repository to shared storage
[diagram: a single shared repository holds the files needed by MOS]

Slide 48

Slide 48 text

Update repository location

tfactl> print repository
.--------------------------------------------------------.
|                          node1                         |
+----------------------+---------------------------------+
| Repository Parameter | Value                           |
+----------------------+---------------------------------+
| Location             | /opt/oracle.ahf/data/repository |
| Maximum Size (MB)    | 10240                           |
| Current Size (MB)    | 11                              |
| Free Size (MB)       | 10229                           |
| Status               | OPEN                            |
'----------------------+---------------------------------'

Slide 49

Slide 49 text

Update repository location

tfactl> set repositorydir=/some/directory/repository
Successfully changed repository
.-------------------------------------------------------------.
| Repository Parameter      | Value                           |
+---------------------------+---------------------------------+
| Old Location              | /opt/oracle.ahf/data/repository |
| New Location              | /some/directory/repository      |
| Current Maximum Size (MB) | 10240                           |
| Current Size (MB)         | 0                               |
| Status                    | OPEN                            |
'---------------------------+---------------------------------'

# Repository commands are applied only on the local node
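
Because the setting only applies to the node where it is run, each cluster node needs the same change. A minimal sketch; the node names, shared mount point, and per-node subdirectory layout are assumptions for illustration:

# Hypothetical: point every node's repository at its own directory on shared storage
for h in node1 node2; do
  ssh root@${h} "tfactl set repositorydir=/shared/ahf/repository/${h}"
done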

Slide 50

Slide 50 text

Configure email notification

Slide 51

Slide 51 text

Set email notifications

[root@node1 ~]# tfactl set notificationAddress=[email protected]
Successfully set notificationAddress=[email protected]
.---------------------------------------------------------------------------.
|                                   node1                                   |
+----------------------------------------------+----------------------------+
| Configuration Parameter                      | Value                      |
+----------------------------------------------+----------------------------+
| Notification Address ( notificationAddress ) | [email protected]          |
'----------------------------------------------+----------------------------'

Slide 52

Slide 52 text

Set email notifications

View SMTP settings: tfactl print smtp

[root@node1 ~]# tfactl print smtp
.---------------------------.
| SMTP Server Configuration |
+---------------+-----------+
| Parameter     | Value     |
+---------------+-----------+
| smtp.auth     | false     |
| smtp.port     | 25        |
| smtp.from     | tfa       |
| smtp.cc       | -         |
| smtp.password | *******   |
| smtp.ssl      | false     |
| smtp.debug    | true      |
| smtp.user     | -         |
| smtp.host     | localhost |
| smtp.bcc      | -         |
| smtp.to       | -         |
'---------------+-----------'

Slide 53

Slide 53 text

Set email notifications

Configure SMTP settings: tfactl set smtp
Opens an interactive dialog

[root@node1 ~]# tfactl set smtp
.---------------------------.
| SMTP Server Configuration |
+---------------+-----------+
| Parameter     | Value     |
+---------------+-----------+
| smtp.password | *******   |
| smtp.debug    | true      |
| smtp.user     | -         |
| smtp.cc       | -         |
| smtp.port     | 25        |
| smtp.from     | tfa       |
| smtp.bcc      | -         |
| smtp.to       | -         |
| smtp.auth     | false     |
| smtp.ssl      | false     |
| smtp.host     | localhost |
'---------------+-----------'

Enter the SMTP property you want to update : smtp.host

Slide 54

Slide 54 text

Set email notifications

View SMTP settings: tfactl print smtp
Configure SMTP settings: tfactl set smtp

Enter the SMTP property you want to update : smtp.host
Enter value for smtp.host : 127.0.0.1

SMTP Property smtp.host updated with 127.0.0.1

Do you want to continue ? Y|N : Y
.---------------------------.
| SMTP Server Configuration |
+---------------+-----------+
| Parameter     | Value     |
+---------------+-----------+
| smtp.port     | 25        |
| smtp.cc       | -         |
| smtp.user     | -         |
| smtp.password | *******   |
| smtp.debug    | true      |
| smtp.host     | 127.0.0.1 |
| smtp.ssl      | false     |
...

Slide 55

Slide 55 text

Set email notifications

Send test email: tfactl sendmail [email protected]

Slide 56

Slide 56 text

Recommended configurations

Slide 57

Slide 57 text

Recommended configurations

# Repository settings
tfactl set autodiagcollect=ON               # default
tfactl set trimfiles=ON                     # default
tfactl set reposizeMB=                      # default=10240
tfactl set rtscan=ON                        # default
tfactl set redact=mask                      # default=none

# Disk space monitoring
tfactl set diskUsageMon=ON                  # default=OFF
tfactl set diskUsageMonInterval=240         # Depends on activity. default=60

# Log purge
tfactl set autopurge=ON                     # If space is slim. default=OFF
tfactl set manageLogsAutoPurge=ON           # default=OFF
tfactl set manageLogsAutoPurgeInterval=720  # Set to 12 hours. default=60
tfactl set manageLogsAutoPurgePolicyAge=30d # default=30
tfactl set minfileagetopurge=48             # default=12

Slide 58

Slide 58 text

Recommended configurations

[root@node1 ~]# tfactl print config | egrep "^\|.*\|.*\|$" | awk -F'|' '{print $2, $3}' | sort | grep -i manage
 Logs older than the time period will be auto purged(days[d] hours[h]) ( manageLogsAutoPurgePolicyAge )
 Managelogs Auto Purge ( manageLogsAutoPurge )  OFF

[root@node1 ~]# tfactl set manageLogsAutoPurge=ON
Successfully set manageLogsAutoPurge=ON
.-------------------------------------------------------.
|                         node1                         |
+-----------------------------------------------+-------+
| Configuration Parameter                       | Value |
+-----------------------------------------------+-------+
| Managelogs Auto Purge ( manageLogsAutoPurge ) | ON    |
'-----------------------------------------------+-------'

[root@node1 ~]# tfactl print config | egrep "^\|.*\|.*\|$" | awk -F'|' '{print $2, $3}' | sort | grep -i manage
 Logs older than the time period will be auto purged(days[d] hours[h]) ( manageLogsAutoPurgePolicyAge )
 Managelogs Auto Purge ( manageLogsAutoPurge )  ON

[root@node1 ~]# tfactl set manageLogsAutoPurgePolicyAge=30d
Successfully set manageLogsAutoPurgePolicyAge=30d
.---------------------------------------------------------------------------------------------------------------.
|                                                      node1                                                     |
+---------------------------------------------------------------------------------------------------------+-------+
| Configuration Parameter                                                                                  | Value |
+---------------------------------------------------------------------------------------------------------+-------+
| Logs older than the time period will be auto purged(days[d]|hours[h]) ( manageLogsAutoPurgePolicyAge )   | 30d   |
'---------------------------------------------------------------------------------------------------------+-------'

Slide 59

Slide 59 text

Additional configuration options

Slide 60

Slide 60 text

View configurations

Default configuration list is… unsorted :(
Some configurations listed as parameters, others as descriptions :(

tfactl print config
tfactl print config | grep -e "^\|.*\|.*\|$" | sort
tfactl print config | egrep "^\|.*\|.*\|$" | sort
tfactl print config | egrep "^\|.*\|.*\|$" | \
  awk -F'|' '{print $2, $3}' | sort
tfactl get

Slide 61

Slide 61 text

View configurations

[root@node1 ~]# tfactl print config | egrep "^\|.*\|.*\|$" | awk -F'|' '{print $2, $3}' | sort
 actionrestartlimit  30
 Age of Purging Collections (Hours) ( minFileAgeToPurge )  12
 AlertLogLevel  ALL
 Alert Log Scan ( rtscan )  ON
 Allowed Sqlticker Delay in Minutes ( sqltickerdelay )  3
 analyze  OFF
 arc.backupmissing  1
 arc.backupmissing.samples  2
 arc.backup.samples  3
 arc.backupstatus  1
 Archive Backup Delay Minutes ( archbackupdelaymins )  40
 Auto Diagcollection ( autodiagcollect )  ON
 Automatic Purging ( autoPurge )  ON
 Automatic Purging Frequency ( purgeFrequency )  4
 Auto Sync Certificates ( autosynccertificates )  ON
 BaseLogPath  ERROR
 cdb.backupmissing  1
 cdb.backupmissing.samples  2
 cdb.backup.samples  1
 cdb.backupstatus  1
...

Slide 62

Slide 62 text

Not all configurations can be set

[root@node1 ~]# tfactl set
...
  autodiagcollect          allow for automatic diagnostic collection when an event is observed (default ON)
  trimfiles                allow trimming of files during diagcollection (default ON)
  tracelevel               control the trace level of log files in /opt/oracle.ahf/data/node1/diag/tfa (default INFO for all facilities)
  reposizeMB=              set the maximum size of diagcollection repository to MB
  repositorydir=           set the diagcollection repository to
  logsize=                 set the maximum size of each TFA log to MB (default 50 MB)
  logcount=                set the maximum number of TFA logs to (default 10)
  port=                    set TFA Port to
  maxcorefilesize=         set the maximum size of Core File to MB (default 20 MB )
  maxcompliancesize=       set the maximum size of Compliance Index directory MB (default 150 MB )
  maxcomplianceruns=       set the maximum number of Compliance Runs to be stored (default 30)
  maxcorecollectionsize=   set the maximum collection size of Core Files to MB (default 200 MB )
  maxfilecollectionsize=   set the maximum file collection size to MB (default 5 GB )
  autopurge                allow automatic purging of collections when less space is observed in repository (default OFF)
  autosynccertificates     Manage TFA Auto Sync Certificates
  publicip                 allow TFA to run on public network

Slide 63

Slide 63 text

Not all configurations can be set

[root@node1 ~]# tfactl set
...
  redact                         setting for ACR redaction
  smtp                           Update SMTP Configuration
  minSpaceForRTScan=             Minimun space required to run RT Scanning(default 500)
  rtscan                         allow Alert Log Scanning
  diskUsageMon                   allow Disk Usage Monitoring
  diskUsageMonInterval=          Time interval between consecutive Disk Usage Snapshot(default 60 minutes)
  manageLogsAutoPurge            allow Manage Log Auto Purging
  manageLogsAutoPurgeInterval=   Time interval between consecutive Managelogs Auto Purge(default 60 minutes)
  manageLogsAutoPurgePolicyAge=  Logs older than the time period will be auto purged(default 30 days)
  minfileagetopurge              set the age in hours for collections to be skipped by AutoPurge (default 12 Hours)
  tfaIpsPoolSize                 set the TFA IPS pool size
  tfaDbUtlPurgeAge               set the TFA ISA Purge Age (in seconds)
  tfaDbUtlPurgeMode              set the TFA ISA Purge Mode (simple/resource)
  tfaDbUtlPurgeThreadDelay       set the TFA ISA Purge Thread Delay (in minutes)
  tfaDbUtlCrsProfileDelay        set the TFA ISA Crs Profile Delay
  indexRecoveryMode              set the Lucene index recovery mode (recreate/restore)
  rediscoveryInterval            set the time interval for running lite rediscovery

Slide 64

Slide 64 text

Annoyances

Slide 65

Slide 65 text

Annoyances • Documentation isn’t always current • Commands, options, and syntax may not match docs • Run tfactl -h or tfactl help • Some commands are user (root, oracle, grid) specific • Regression (usually minor) • Don’t build complex automation on new features • Don’t (always) rush to upgrade to the latest version • Example: GI can’t always see/manage DB & vice-versa

Slide 66

Slide 66 text

Annoyances • The transition from tfactl to ahfctl is incomplete • Commands may be: • …available in both • …deprecated in tfactl • …new and unavailable in tfactl • …not ported to ahfctl (yet)

Slide 67

Slide 67 text

Annoyances • Date format options in commands are inconsistent • Some require quotes, some don’t, some work either way • Some take double quotes, others take single quotes • YYYY/MM/DD or YYYY-MM-DD or YYYYMMDD or … • Some take dates and times separately • Sometimes there are -d and -t flags • Some take timestamps • Some work with either, others are specific
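
As a concrete illustration (using the analyze options listed later in this deck), the same window can be expressed either as an explicit range or as a relative period:

# Explicit date range vs. relative period, both accepted by analyze
tfactl analyze -from "2022-02-02" -to "2022-02-03"
tfactl analyze -last 1d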

Slide 68

Slide 68 text

However…

Most commands have good help options: tfactl -h

[root@node1 ~]# tfactl diagcollect -h
Collect logs from across nodes in cluster

Usage : /opt/oracle.ahf/tfa/bin/tfactl diagcollect [ [component_name1] [component_name2] ... [component_nameN] | [-srdc ] | [-defips]] [-sr ] [-node ] [-tag ] [-z ] [-acrlevel ] [-last | -from

Slide 69

Slide 69 text

However…

Many commands (incl. complex ones) have an -examples option

[root@node1 ~]# tfactl diagcollect -examples

Examples:

/opt/oracle.ahf/tfa/bin/tfactl diagcollect
          Trim and Zip all files updated in the last 1 hours as well as chmos/osw data from across the cluster and collect at the initiating node
          Note: This collection could be larger than required but is there as the simplest way to capture diagnostics if an issue has recently occurred.

/opt/oracle.ahf/tfa/bin/tfactl diagcollect -last 8h
          Trim and Zip all files updated in the last 8 hours as well as chmos/osw data from across the cluster and collect at the initiating node

/opt/oracle.ahf/tfa/bin/tfactl diagcollect -database hrdb,fdb -last 1d -z foo
          Trim and Zip all files from databases hrdb & fdb in the last 1 day and collect at the initiating node

...

Slide 70

Slide 70 text

However…

Many commands (incl. complex ones) have an -examples option

[oracle@node1 ~]$ tfactl analyze -examples

Examples:

/opt/oracle.ahf/tfa/bin/tfactl analyze -since 5h
          Show summary of events from alert logs, system messages in last 5 hours.

/opt/oracle.ahf/tfa/bin/tfactl analyze -comp os -since 1d
          Show summary of events from system messages in last 1 day.

/opt/oracle.ahf/tfa/bin/tfactl analyze -search "ORA-" -since 2d
          Search string ORA- in alert and system logs in past 2 days.

/opt/oracle.ahf/tfa/bin/tfactl analyze -search "/Starting/c" -since 2d
          Search case sensitive string "Starting" in past 2 days.

/opt/oracle.ahf/tfa/bin/tfactl analyze -comp osw -since 6h
          Show OSWatcher Top summary in last 6 hours.

...

Slide 71

Slide 71 text

Managing ADR logs

Slide 72

Slide 72 text

Managing ADR logs

Report space use for database, GI logs
Report space variations over time

# Reporting
tfactl managelogs -show usage              # Show all space use in ADR
tfactl managelogs -show usage -gi          # Show GI space use
tfactl managelogs -show usage -database    # Show DB space use
tfactl managelogs -show usage -saveusage   # Save use for variation reports

# Report space use variation
tfactl managelogs -show variation -since 1d
tfactl managelogs -show variation -since 1d -gi
tfactl managelogs -show variation -since 1d -database

Slide 73

Slide 73 text

Managing ADR logs

Purge logs in ADR across cluster nodes
ALERT, INCIDENT, TRACE, CDUMP, HM, UTSCDMP, LOG
All diagnostic subdirectories must be owned by dba/grid

# Purge ADR files
tfactl managelogs -purge -older 30d -dryrun        # Estimated space saving
tfactl managelogs -purge -older 30d                # Purge logs > 30 days old
tfactl managelogs -purge -older 30d -gi            # GI only
tfactl managelogs -purge -older 30d -database      # Database only
tfactl managelogs -purge -older 30d -database all  # All databases
tfactl managelogs -purge -older 30d -database SID1,SID3
tfactl managelogs -purge -older 30d -node all      # All nodes
tfactl managelogs -purge -older 30d -node local    # Local node
tfactl managelogs -purge -older 30d -node NODE1,NODE3

Slide 74

Slide 74 text

Managing ADR logs - Things to know • First-time purge can take a long time for: • Large directories • Many files • NOTE: Purge operation loops over files • Strategies for first time purge: • Delete in batches by age—365 days, 180 days, 90 days, etc. • Delete database and GI homes separately • Delete for individual SIDs, nodes
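
A rough sketch of the batched approach described above. The age steps are just examples; a -dryrun first gives an estimate before anything is removed:

# Hypothetical first-time cleanup: work down from the oldest files in stages
tfactl managelogs -purge -older 365d -dryrun   # estimate before purging
tfactl managelogs -purge -older 365d
tfactl managelogs -purge -older 180d
tfactl managelogs -purge -older 90d
tfactl managelogs -purge -older 30d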

Slide 75

Slide 75 text

Managing ADR logs - File ownership • Files cannot be deleted if subdirectories under ADR_HOME are not owned by grid/oracle or oinstall/dba • One mis-owned subdirectory • No files under that ADR_HOME will be purged • Even subdirectories with correct ownership! • Depending on version • grid may not be able to delete files in database ADR_HOMEs • oracle may not be able to delete files in GI ADR_HOMEs
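
A quick way to spot the mis-owned subdirectories described above. A sketch only; the diag paths and expected owners vary by environment:

# Hypothetical check: list ADR subdirectories not owned by the expected users
find /u01/app/oracle/diag /u01/app/grid/diag -type d ! -user oracle ! -user grid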

Slide 76

Slide 76 text

Managing ADR logs - Files not deleted when • ADR_HOME: • …schema version is mismatched • …library version is mismatched • …schema version is obsolete • …is not registered • …is for an orphaned CRS event or user • …is for an inactive listener

Slide 77

Slide 77 text

Managing ADR logs - Files not deleted when • ORACLE_SID or ORACLE_HOME not present in oratab • Duplicate ORACLE_SIDs are present in oratab • Database unique name is mismatched to its directory • Can occur during cloning operations • ADR_BASE is not set properly • $ORACLE_HOME/log/diag directory is missing • $ORACLE_HOME/log/diag/adrci_dir.mif missing • $ORACLE_HOME/log/diag/adrci_dir.mif doesn’t list ADR_BASE
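
Some of these conditions are easy to check by hand. A sketch, assuming the default /etc/oratab location and a standard ORACLE_HOME layout:

# Duplicate ORACLE_SIDs in oratab
grep -vE '^#|^$' /etc/oratab | cut -d: -f1 | sort | uniq -d

# adrci_dir.mif should exist and list the ADR_BASE
cat $ORACLE_HOME/log/diag/adrci_dir.mif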

Slide 78

Slide 78 text

The best commands in AHF
analyze, changes, events

Slide 79

Slide 79 text

analyze

# Perform system analysis of DB, ASM, GI, system, OS Watcher logs/output
tfactl analyze

# Options:
-comp [db|asm|crs|acfs|oratop|os|osw|oswslabinfo]   # default=all
-type [error|warning|generic]                       # default=error
-node [all|local|nodename]                          # default=all
-o filename                                         # Output to filename

# Times and ranges
-for "YYYY-MM-DD"
-from "YYYY-MM-DD" -to "YYYY-MM-DD"
-from "YYYY-MM-DD HH24:MI:SS" -to "YYYY-MM-DD HH24:MI:SS"
-last 6h
-last 1d

Slide 80

Slide 80 text

analyze

# Perform system analysis of DB, ASM, GI, system, OS Watcher logs/output
tfactl analyze

# Options:
-search "pattern"      # Search in DB and CRS alert logs
                       # Sets the search period to -last 1h
                       # Override with -last xh|xd
-verbose
timeline file1 file2   # Shows timeline for specified files

Slide 81

Slide 81 text

analyze

INFO: analyzing all (Alert and Unix System Logs) logs for the last 1440 minutes... Please wait...
INFO: analyzing host: node1

Report title: Analysis of Alert,System Logs
Report date range: last ~1 day(s)
Report (default) time zone: GMT - Greenwich Mean Time
Analysis started at: 03-Feb-2022 06:27:46 PM GMT
Elapsed analysis time: 0 second(s).
Configuration file: /opt/oracle.ahf/tfa/ext/tnt/conf/tnt.prop
Configuration group: all
Total message count: 963, from 02-Feb-2022 08:01:39 PM GMT to 03-Feb-2022 04:23:43 PM GMT
Messages matching last ~1 day(s): 963, from 02-Feb-2022 08:01:39 PM GMT to 03-Feb-2022 04:23:43 PM GMT
last ~1 day(s) error count: 4, from 02-Feb-2022 08:03:31 PM GMT to 02-Feb-2022 08:11:12 PM GMT
last ~1 day(s) ignored error count: 0
last ~1 day(s) unique error count: 3

Message types for last ~1 day(s)
Occurrences percent server name          type
----------- ------- -------------------- -----
        952   98.9% node1                generic
          7    0.7% node1                WARNING
          4    0.4% node1                ERROR
----------- -------
        963  100.0%

Slide 82

Slide 82 text

analyze

...
Unique error messages for last ~1 day(s)
Occurrences percent server name error
----------- ------- ----------- -----
          2   50.0% node1       [OCSSD(30863)]CRS-1601: CSSD Reconfiguration complete. Active nodes are node1 .
          1   25.0% node1       [OCSSD(2654)]CRS-1601: CSSD Reconfiguration complete. Active nodes are node1 node2 .
          1   25.0% node1       [OCSSD(2654)]CRS-1601: CSSD Reconfiguration complete. Active nodes are node1 .
----------- -------
          4  100.0%

Slide 83

Slide 83 text

changes

# Find changes made on the system
tfactl changes

# Times and ranges
-for "YYYY-MM-DD"
-from "YYYY-MM-DD" -to "YYYY-MM-DD"
-from "YYYY-MM-DD HH24:MI:SS" -to "YYYY-MM-DD HH24:MI:SS"
-last 6h
-last 1d

Slide 84

Slide 84 text

changes

[root@node1 ~]# tfactl changes -last 2d

Output from host : node2
------------------------------
[Feb/02/2022 20:11:16.438]: Package: cvuqdisk-1.0.10-1.x86_64

Output from host : node1
------------------------------
[Feb/02/2022 19:57:16.438]: Package: cvuqdisk-1.0.10-1.x86_64
[Feb/02/2022 20:11:16.438]: Package: cvuqdisk-1.0.10-1.x86_64

Slide 85

Slide 85 text

events

[root@node1 ~]# tfactl events -last 1d

Output from host : node2
------------------------------
Event Summary:
INFO    :3
ERROR   :2
WARNING :0

Event Timeline:
[Feb/02/2022 20:10:46.649 GMT]: [crs]: 2022-02-02 20:10:46.649 [ORAROOTAGENT(27881)]CRS-5822: Agent '/u01/app/19.3.0.0/grid/bin/orarootagent_root' disconnected from server. Details at (:CRSAGF00117:) {0:1:3} in /u01/app/grid/diag/crs/node2/crs/trace/ohasd_orarootagent_root.trc.
[Feb/02/2022 20:11:12.856 GMT]: [crs]: 2022-02-02 20:11:12.856 [OCSSD(28472)]CRS-1601: CSSD Reconfiguration complete. Active nodes are node1 node2 .
[Feb/02/2022 20:11:57.000 GMT]: [asm.+ASM2]: Reconfiguration started (old inc 0, new inc 4)
[Feb/02/2022 20:28:31.000 GMT]: [db.db193h1.DB193H12]: Starting ORACLE instance (normal) (OS id: 24897)
[Feb/02/2022 20:28:42.000 GMT]: [db.db193h1.DB193H12]: Reconfiguration started (old inc 0, new inc 4)

Slide 86

Slide 86 text

The best utilities in AHF
alertsummary, grep, tail

Slide 87

Slide 87 text

alertsummary

# Summarize events in database and ASM alert logs
tfactl alertsummary

[root@node1 ~]# tfactl alertsummary

Output from host : node1
------------------------------
Reading /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
------------------------------------------------------------------------
02 02 2022 20:04:57 Database started
------------------------------------------------------------------------
02 02 2022 20:07:41 Database started
Summary:
Ora-600=0, Ora-7445=0, Ora-700=0
~~~~~~~
Warning: Only FATAL errors reported
Warning: These errors were seen and NOT reported
Ora-15173
Ora-15032
Ora-15017
Ora-15013
Ora-15326

Slide 88

Slide 88 text

grep

# Find patterns in multiple files
tfactl grep "ERROR" alert
tfactl grep -i "error" alert,trace

[root@node1 ~]# tfactl grep -i "error" alert

Output from host : node1
------------------------------
Searching 'error' in alert
Searching /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log
+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-+-
28: PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
375:Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_32035.trc:
378:Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_32049.trc:
446:ERROR: /* ASMCMD */ALTER DISKGROUP ALL MOUNT
543: PAGESIZE AVAILABLE_PAGES EXPECTED_PAGES ALLOCATED_PAGES ERROR(s)
1034:Errors in file /u01/app/grid/diag/asm/+asm/+ASM1/trace/+ASM1_ora_28105.trc:
...

Slide 89

Slide 89 text

tail

# Tail logs by name or pattern
tfactl tail alert_                    # Tail all logs matching alert_
tfactl tail alert_ORCL1.log -exact    # Tail for an exact match
tfactl tail -f alert_                 # Follow logs (local node only)

[root@node1 ~]# tfactl tail -f alert_

Output from host : node1
------------------------------
==> /u01/app/grid/diag/asm/+asm/+ASM1/trace/alert_+ASM1.log <==
NOTE: cleaning up empty system-created directory '+DATA/vgtol7-rac-c/OCRBACKUP/backup00.ocr.274.1095654191'
2022-02-03T12:23:35.194335+00:00
NOTE: cleaning up empty system-created directory '+DATA/vgtol7-rac-c/OCRBACKUP/backup01.ocr.274.1095654191'
2022-02-03T16:23:43.602629+00:00
NOTE: cleaning up empty system-created directory '+DATA/vgtol7-rac-c/OCRBACKUP/backup01.ocr.275.1095668599'

==> /u01/app/oracle/diag/rdbms/db193h1/DB193H11/trace/alert_DB193H11.log <==
TABLE SYS.WRI$_OPTSTAT_HISTHEAD_HISTORY: ADDED INTERVAL PARTITION SYS_P301 (44594) VALUES LESS THAN (TO_DATE('...
SYS.WRI$_OPTSTAT_HISTGRM_HISTORY: ADDED INTERVAL PARTITION SYS_P304 (44594) VALUES LESS THAN (TO_DATE('...
2022-02-03T06:00:16.143988+00:00
Thread 1 advanced to log sequence 22 (LGWR switch)
  Current log# 2 seq# 22 mem# 0: +DATA/DB193H1/ONLINELOG/group_2.265.1095625353

Slide 90

Slide 90 text

Questions

Slide 91

Slide 91 text

No content