Database Administration

Learning Objectives

By the end of this section, you will be able to:

  • Install, configure, and manage major database platforms (MySQL, PostgreSQL, SQL Server, MongoDB)
  • Perform essential database administration tasks for each platform
  • Troubleshoot common database issues at the system administration level
  • Implement backup and recovery procedures for different database systems
  • Monitor and optimize database performance across platforms
  • Secure database installations according to best practices
  • Choose the right database platform for specific business requirements

Introduction: The Database Administrator's Toolkit

As an MSP professional, you'll encounter various database systems in client environments. Each has unique characteristics, management tools, and troubleshooting approaches. It's like being a mechanic who needs to work on different car brands - while the basic principles are similar, each has specific procedures, tools, and common issues.

In this section, we'll dive deep into the four most common database systems you'll manage: MySQL, PostgreSQL, Microsoft SQL Server, and MongoDB. You'll learn platform-specific administration, troubleshooting, and optimization techniques that will make you effective across diverse client environments.

Overview of Major Database Platforms

Platform Comparison and Use Cases

| Database   | Type           | Best For                                  | Typical Cost           | Market Share |
|------------|----------------|-------------------------------------------|------------------------|--------------|
| MySQL      | Relational     | Web applications, small-medium businesses | Free (Open Source)     | 39%          |
| PostgreSQL | Relational     | Complex applications, data warehousing    | Free (Open Source)     | 13%          |
| SQL Server | Relational     | Windows environments, enterprises         | ₹50,000-5,00,000/year  | 22%          |
| MongoDB    | NoSQL Document | Modern web apps, big data, APIs           | Free + Commercial tiers | 8%          |

When MSPs Encounter Each Platform

MySQL:

  • WordPress websites, e-commerce platforms (WooCommerce, Magento)
  • Small business applications, startup web services
  • LAMP stack environments (Linux, Apache, MySQL, PHP)

PostgreSQL:

  • Enterprise applications requiring complex queries
  • Data analytics and business intelligence systems
  • Applications needing advanced data types and extensions

SQL Server:

  • Microsoft-centric environments (.NET applications)
  • Enterprise resource planning (ERP) systems
  • Business intelligence and reporting services

MongoDB:

  • Modern web applications with flexible data structures
  • Content management systems, user profile systems
  • IoT data collection, real-time analytics

MySQL Administration

Installation and Configuration

Platform Availability: Linux, Windows, macOS
Default Port: 3306
Configuration File: my.cnf (Linux/macOS) or my.ini (Windows)

Initial Setup and Security

Ubuntu/Debian Installation:

# Update package repository
sudo apt update

# Install MySQL server
sudo apt install mysql-server

# Secure installation
sudo mysql_secure_installation

Security Configuration Checklist:

# Remove anonymous users: Yes
# Disallow root login remotely: Yes
# Remove test database: Yes
# Reload privilege tables: Yes

Create Administrative User:

-- Connect as root
sudo mysql -u root -p

-- Create admin user for daily operations
CREATE USER 'admin'@'localhost' IDENTIFIED BY 'SecurePassword123!';
GRANT ALL PRIVILEGES ON *.* TO 'admin'@'localhost' WITH GRANT OPTION;
FLUSH PRIVILEGES;

Essential MySQL Configuration

Key Configuration Parameters (/etc/mysql/mysql.conf.d/mysqld.cnf):

[mysqld]
# Basic Settings
bind-address = 127.0.0.1 # Only local connections (secure default)
port = 3306
max_connections = 151 # Maximum concurrent connections

# Performance Settings
innodb_buffer_pool_size = 1G # 70-80% of available RAM on a dedicated database server

# Query cache (MySQL 5.7 and earlier only; removed in MySQL 8.0)
query_cache_type = 1 # Enable query caching
query_cache_size = 64M # Cache size for repeated queries

# Logging Settings
general_log = 0 # Disable for performance (enable for debugging)
slow_query_log = 1 # Enable slow query logging
slow_query_log_file = /var/log/mysql/slow.log
long_query_time = 2 # Log queries taking more than 2 seconds

# Binary Logging (for replication/point-in-time recovery)
log-bin = mysql-bin
binlog_format = ROW
expire_logs_days = 7 # Keep logs for 7 days (use binlog_expire_logs_seconds on MySQL 8.0+)

Apply Configuration Changes:

# Check configuration syntax before restarting (MySQL 8.0.16+)
sudo mysqld --validate-config

# Restart MySQL service
sudo systemctl restart mysql

# Verify service status
sudo systemctl status mysql

MySQL Backup and Recovery

Complete Database Backup:

# Backup single database
mysqldump -u admin -p --single-transaction --routines --triggers ecommerce_db > ecommerce_backup.sql

# Backup all databases
mysqldump -u admin -p --all-databases --single-transaction > all_databases_backup.sql

# Compressed backup to save space
mysqldump -u admin -p --single-transaction ecommerce_db | gzip > ecommerce_backup.sql.gz

Automated Daily Backup Script:

#!/bin/bash
# /opt/mysql_backup.sh

# Variables
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)
MYSQL_USER="backup_user"
MYSQL_PASSWORD="BackupPass123!"
DATABASES="ecommerce_db crm_db inventory_db"

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup each database
for db in $DATABASES; do
echo "Backing up database: $db"
mysqldump -u $MYSQL_USER -p$MYSQL_PASSWORD \
--single-transaction \
--routines \
--triggers \
$db | gzip > $BACKUP_DIR/${db}_${DATE}.sql.gz
done

# Remove backups older than 30 days
find $BACKUP_DIR -name "*.sql.gz" -mtime +30 -delete

echo "Backup completed: $(date)"

Schedule with Cron:

# Edit crontab
crontab -e

# Add daily backup at 2 AM
0 2 * * * /opt/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1

Database Recovery:

# Restore single database
mysql -u admin -p ecommerce_db < ecommerce_backup.sql

# Restore compressed backup
gunzip < ecommerce_backup.sql.gz | mysql -u admin -p ecommerce_db

# Restore all databases
mysql -u admin -p < all_databases_backup.sql

Common MySQL Troubleshooting

Problem 1: MySQL Won't Start

Symptoms: Service fails to start, error logs show issues

Troubleshooting Steps:

# Check service status
sudo systemctl status mysql

# Check error logs
sudo tail -f /var/log/mysql/error.log

# Check disk space (common cause)
df -h

# Check MySQL configuration syntax
sudo mysqld --help --verbose > /dev/null

Common Causes and Solutions:

  • Disk space full: Clear logs, add storage
  • Configuration errors: Check my.cnf syntax
  • Permission issues: Fix MySQL data directory ownership
  • Port conflicts: Change port or stop conflicting service

Fix Permission Issues:

# Fix MySQL data directory ownership
sudo chown -R mysql:mysql /var/lib/mysql
sudo chmod 750 /var/lib/mysql
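
For the port-conflict case in the list above, a quick check of what is already listening on MySQL's default port (3306 assumed) usually identifies the culprit:

# Show the process currently listening on port 3306, if any
sudo ss -tlnp | grep 3306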

Problem 2: Slow Query Performance

Enable Slow Query Log:

-- Enable slow query logging
SET GLOBAL slow_query_log = 1;
SET GLOBAL long_query_time = 1;
SET GLOBAL slow_query_log_file = '/var/log/mysql/slow.log';

Analyze Slow Queries:

# Install MySQL utilities
sudo apt install percona-toolkit

# Analyze slow query log
pt-query-digest /var/log/mysql/slow.log

Common Performance Issues:

  • Missing indexes: Add indexes on frequently queried columns
  • Full table scans: Optimize WHERE clauses
  • Large result sets: Add LIMIT clauses
  • Inefficient JOINs: Rewrite queries, add proper indexes

Add Indexes for Common Queries:

-- Index on frequently searched columns
CREATE INDEX idx_customer_email ON customers(email);
CREATE INDEX idx_order_date ON orders(order_date);
CREATE INDEX idx_product_category ON products(category);

-- Composite index for multi-column searches
CREATE INDEX idx_customer_city_date ON customers(city, registration_date);

Problem 3: Connection Issues

Too Many Connections Error:

-- Check current connections
SHOW PROCESSLIST;

-- Check maximum connections
SHOW VARIABLES LIKE 'max_connections';

-- Increase connection limit
SET GLOBAL max_connections = 200;

Make permanent change in configuration:

# /etc/mysql/mysql.conf.d/mysqld.cnf
[mysqld]
max_connections = 200

Access Denied Errors:

-- Check user privileges
SELECT User, Host FROM mysql.user;
SHOW GRANTS FOR 'username'@'hostname';

-- Fix user permissions
GRANT SELECT, INSERT, UPDATE, DELETE ON database_name.* TO 'username'@'hostname';
FLUSH PRIVILEGES;

MySQL Performance Optimization

Memory Configuration:

# Optimize for available RAM (example for 8GB server)
[mysqld]
innodb_buffer_pool_size = 5G # 60-70% of total RAM on a shared 8GB server
key_buffer_size = 256M # For MyISAM tables
query_cache_size = 128M # Query result caching (MySQL 5.7 and earlier; removed in 8.0)
tmp_table_size = 64M # Temporary table size
max_heap_table_size = 64M # In-memory table size

Connection Optimization:

[mysqld]
max_connections = 200 # Based on expected load
thread_cache_size = 50 # Reuse connection threads
connect_timeout = 10 # Connection timeout
wait_timeout = 600 # Close idle connections

Query Optimization Tools:

-- performance_schema cannot be changed at runtime; enable it in my.cnf ([mysqld] performance_schema = ON) and restart
-- Verify that it is enabled
SHOW VARIABLES LIKE 'performance_schema';

-- Check query execution plans
EXPLAIN SELECT * FROM orders WHERE customer_id = 1001;

-- Analyze table statistics
ANALYZE TABLE customers, orders, products;

PostgreSQL Administration

Installation and Configuration

Ubuntu/Debian Installation:

# Install PostgreSQL
sudo apt update
sudo apt install postgresql postgresql-contrib

# Check service status
sudo systemctl status postgresql

Initial Setup:

# Switch to postgres user
sudo -i -u postgres

# Access PostgreSQL prompt
psql

# Create administrative user
CREATE USER admin WITH PASSWORD 'SecurePassword123!';
ALTER USER admin CREATEDB CREATEROLE;

# Create application database
CREATE DATABASE ecommerce_db OWNER admin;

# Exit psql
\q

PostgreSQL Configuration

Main Configuration Files:

  • postgresql.conf: Main configuration
  • pg_hba.conf: Client authentication
  • pg_ident.conf: User name mapping

Key Configuration Settings (/etc/postgresql/14/main/postgresql.conf):

# Connection Settings
listen_addresses = 'localhost' # IP addresses to listen on
port = 5432 # PostgreSQL port
max_connections = 100 # Maximum concurrent connections

# Memory Settings
shared_buffers = 2GB # Shared memory buffers (25% of RAM)
work_mem = 64MB # Memory for query operations
maintenance_work_mem = 512MB # Memory for maintenance operations

# Logging Settings
logging_collector = on # Enable log collection
log_directory = 'pg_log' # Log file directory
log_filename = 'postgresql-%Y-%m-%d.log' # Log file naming
log_min_duration_statement = 1000 # Log slow queries (1 second)
log_line_prefix = '%t [%p]: [%l-1] user=%u,db=%d,app=%a,client=%h '

Client Authentication (/etc/postgresql/14/main/pg_hba.conf):

# TYPE  DATABASE        USER            ADDRESS                 METHOD

# Local connections
local   all             postgres                                peer
local   all             admin                                   md5

# IPv4 connections (localhost and the application subnet)
host    all             all             127.0.0.1/32            md5
host    ecommerce_db    app_user        192.168.1.0/24          md5

# IPv6 local connections
host    all             all             ::1/128                 md5

Apply Configuration Changes:

# Reload configuration
sudo systemctl reload postgresql

# Restart if needed
sudo systemctl restart postgresql

PostgreSQL Backup and Recovery

Logical Backups with pg_dump:

# Backup single database
pg_dump -h localhost -U admin -W ecommerce_db > ecommerce_backup.sql

# Compressed backup
pg_dump -h localhost -U admin -W -Fc ecommerce_db > ecommerce_backup.backup

# Backup all databases
pg_dumpall -h localhost -U postgres > all_databases.sql

# Backup with custom format (recommended)
pg_dump -h localhost -U admin -W -Fc -b -v -f ecommerce_$(date +%Y%m%d).backup ecommerce_db

Automated Backup Script:

#!/bin/bash
# /opt/postgresql_backup.sh

# Variables
BACKUP_DIR="/backup/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)
PGUSER="backup_user"
PGPASSWORD="BackupPass123!"
export PGPASSWORD
DATABASES="ecommerce_db crm_db inventory_db"

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup each database
for db in $DATABASES; do
echo "Backing up database: $db"
pg_dump -h localhost -U $PGUSER -Fc -b -v \
-f $BACKUP_DIR/${db}_${DATE}.backup $db
done

# Remove backups older than 30 days
find $BACKUP_DIR -name "*.backup" -mtime +30 -delete

echo "PostgreSQL backup completed: $(date)"

Database Restore:

# Restore from SQL dump
psql -h localhost -U admin -W ecommerce_db < ecommerce_backup.sql

# Restore from custom format
pg_restore -h localhost -U admin -W -d ecommerce_db ecommerce_backup.backup

# Create database and restore
createdb -U admin ecommerce_db_restored
pg_restore -h localhost -U admin -W -d ecommerce_db_restored ecommerce_backup.backup

PostgreSQL Troubleshooting

Problem 1: Connection Refused

Symptoms: Clients fail to connect with "could not connect to server: Connection refused" (or a FATAL error in the server log)

Troubleshooting:

# Check if PostgreSQL is running
sudo systemctl status postgresql

# Check if listening on correct port
sudo netstat -tlnp | grep 5432

# Check configuration
sudo -u postgres psql -c "SHOW listen_addresses;"
sudo -u postgres psql -c "SHOW port;"

Solutions:

# Edit postgresql.conf
listen_addresses = 'localhost,192.168.1.100' # Add your IP
port = 5432

# Restart PostgreSQL
sudo systemctl restart postgresql

Problem 2: Authentication Failed

Check authentication configuration:

# View pg_hba.conf
sudo cat /etc/postgresql/14/main/pg_hba.conf

# Check user exists
sudo -u postgres psql -c "\du"

Fix authentication:

# Add a user authentication rule to pg_hba.conf (requires root)
echo "host ecommerce_db app_user 192.168.1.0/24 md5" | sudo tee -a /etc/postgresql/14/main/pg_hba.conf

# Reload configuration
sudo systemctl reload postgresql

Problem 3: Database Corruption

Check database integrity:

-- Connect to database
\c ecommerce_db

-- Check database sizes (unexpected growth or errors here can hint at problems)
SELECT datname, pg_database_size(datname) FROM pg_database;

-- Vacuum and analyze (rewrites tables and refreshes planner statistics; errors during the scan surface corruption)
VACUUM FULL VERBOSE ANALYZE;

-- Check a specific table's size
SELECT pg_relation_size('customers');

Repair corrupted database:

# Stop PostgreSQL
sudo systemctl stop postgresql

# Check the underlying filesystem (only while the volume is unmounted)
sudo fsck /dev/sdb1

# Start in single-user mode
sudo -u postgres postgres --single -D /var/lib/postgresql/14/main ecommerce_db

# At the single-user backend prompt, rebuild indexes:
REINDEX DATABASE ecommerce_db;

PostgreSQL Performance Monitoring

Built-in Performance Views:

-- Active queries
SELECT pid, query, state, query_start
FROM pg_stat_activity
WHERE state = 'active';

-- Database statistics
SELECT datname, numbackends, xact_commit, xact_rollback
FROM pg_stat_database;

-- Table statistics
SELECT schemaname, tablename, seq_scan, seq_tup_read, idx_scan
FROM pg_stat_user_tables;

-- Index usage
SELECT schemaname, tablename, indexname, idx_scan
FROM pg_stat_user_indexes
ORDER BY idx_scan DESC;

Query Performance Analysis:

-- Enable query execution time logging
ALTER SYSTEM SET log_min_duration_statement = 1000;
SELECT pg_reload_conf();

-- Explain query execution plan
EXPLAIN (ANALYZE, BUFFERS)
SELECT * FROM orders o
JOIN customers c ON o.customer_id = c.customer_id
WHERE c.city = 'Mumbai';

Performance Optimization:

-- Create indexes for common queries
CREATE INDEX CONCURRENTLY idx_customers_city ON customers(city);
CREATE INDEX CONCURRENTLY idx_orders_customer_id ON orders(customer_id);
CREATE INDEX CONCURRENTLY idx_orders_date ON orders(order_date);

-- Update table statistics
ANALYZE customers;
ANALYZE orders;
ANALYZE products;

Microsoft SQL Server Administration

Installation and Configuration

SQL Server Editions:

  • Express: Free, limited to 10GB database size
  • Standard: Full features, moderate scalability (₹1,50,000/year)
  • Enterprise: Advanced features, unlimited scalability (₹5,00,000+/year)

Windows Installation Steps:

  1. Download SQL Server installation media
  2. Run setup.exe as Administrator
  3. Choose "New SQL Server stand-alone installation"
  4. Select features: Database Engine, Management Tools
  5. Configure instance (default or named instance)
  6. Set authentication mode (Windows + SQL Server)
  7. Configure data directories and service accounts
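
For repeatable deployments, the same installation can be run unattended from an elevated prompt. The sketch below uses documented setup.exe parameters, but verify them against the documentation for your SQL Server version and treat the account names and password as placeholders:

# Unattended installation, run from the installation media directory (PowerShell)
.\setup.exe /Q /ACTION=Install /FEATURES=SQLEngine /INSTANCENAME=MSSQLSERVER /SQLSYSADMINACCOUNTS="BUILTIN\Administrators" /SECURITYMODE=SQL /SAPWD="SecurePassword123!" /IACCEPTSQLSERVERLICENSETERMS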

Post-Installation Configuration:

-- Connect with SQL Server Management Studio (SSMS)
-- Create administrative login
CREATE LOGIN [admin_user] WITH PASSWORD = 'SecurePassword123!';
CREATE USER [admin_user] FOR LOGIN [admin_user];
ALTER SERVER ROLE [sysadmin] ADD MEMBER [admin_user];

-- Configure SQL Server settings
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;

-- Set maximum memory (leave 2-4GB for OS)
EXEC sp_configure 'max server memory', 6144; -- 6GB for 8GB system
RECONFIGURE;

SQL Server Security Configuration

Enable SQL Server Authentication:

-- Check current authentication mode
SELECT SERVERPROPERTY('IsIntegratedSecurityOnly');

-- Enable mixed mode (Windows + SQL Server authentication)
-- This requires SQL Server restart
USE master;
GO
EXEC xp_instance_regwrite N'HKEY_LOCAL_MACHINE',
N'Software\Microsoft\MSSQLServer\MSSQLServer',
N'LoginMode', REG_DWORD, 2;

Database Security Best Practices:

-- Create application-specific login and user
CREATE LOGIN [ecommerce_app] WITH PASSWORD = 'AppPassword123!';
USE ecommerce_db;
CREATE USER [ecommerce_user] FOR LOGIN [ecommerce_app];

-- Grant minimum necessary permissions
ALTER ROLE [db_datareader] ADD MEMBER [ecommerce_user];
ALTER ROLE [db_datawriter] ADD MEMBER [ecommerce_user];

-- Grant specific permissions on stored procedures
GRANT EXECUTE ON [dbo].[GetCustomerOrders] TO [ecommerce_user];

SQL Server Backup and Recovery

Full Database Backup:

-- Full backup
BACKUP DATABASE [ecommerce_db]
TO DISK = N'C:\Backup\ecommerce_db_full.bak'
WITH FORMAT, INIT, COMPRESSION,
NAME = N'ecommerce_db Full Backup';

-- Differential backup (changes since last full backup)
BACKUP DATABASE [ecommerce_db]
TO DISK = N'C:\Backup\ecommerce_db_diff.bak'
WITH DIFFERENTIAL, COMPRESSION,
NAME = N'ecommerce_db Differential Backup';

-- Transaction log backup (for point-in-time recovery)
BACKUP LOG [ecommerce_db]
TO DISK = N'C:\Backup\ecommerce_db_log.trn'
WITH COMPRESSION,
NAME = N'ecommerce_db Log Backup';

Automated Backup Maintenance Plan:

-- Create maintenance plan for automated backups
-- Full backup: Weekly (Sunday 2 AM)
-- Differential backup: Daily (2 AM, except Sunday)
-- Log backup: Every 15 minutes
-- Cleanup old backups: Keep 30 days

USE msdb;
GO

-- Create backup job
EXEC dbo.sp_add_job
@job_name = N'ecommerce_db Full Backup';

-- Add backup step
EXEC sp_add_jobstep
@job_name = N'ecommerce_db Full Backup',
@step_name = N'Backup Database',
@command = N'BACKUP DATABASE [ecommerce_db]
TO DISK = N''C:\Backup\ecommerce_db_full.bak''
WITH FORMAT, INIT, COMPRESSION';

-- Schedule job (Weekly on Sunday at 2 AM)
EXEC dbo.sp_add_schedule
@schedule_name = N'Weekly Full Backup',
@freq_type = 8, -- Weekly
@freq_interval = 1, -- Sunday
@active_start_time = 20000; -- 2:00 AM

-- Attach the schedule to the job and register the job with the local server so it actually runs
EXEC dbo.sp_attach_schedule @job_name = N'ecommerce_db Full Backup', @schedule_name = N'Weekly Full Backup';
EXEC dbo.sp_add_jobserver @job_name = N'ecommerce_db Full Backup';

Database Restore:

-- Restore full database
RESTORE DATABASE [ecommerce_db_restored]
FROM DISK = N'C:\Backup\ecommerce_db_full.bak'
WITH REPLACE, RECOVERY;

-- Point-in-time restore
RESTORE DATABASE [ecommerce_db_restored]
FROM DISK = N'C:\Backup\ecommerce_db_full.bak'
WITH REPLACE, NORECOVERY;

RESTORE DATABASE [ecommerce_db_restored]
FROM DISK = N'C:\Backup\ecommerce_db_diff.bak'
WITH NORECOVERY;

RESTORE LOG [ecommerce_db_restored]
FROM DISK = N'C:\Backup\ecommerce_db_log.trn'
WITH STOPAT = '2024-03-15 14:30:00', RECOVERY;

SQL Server Performance Monitoring

Dynamic Management Views (DMVs):

-- Check currently running queries
SELECT
r.session_id,
r.status,
r.command,
r.cpu_time,
r.total_elapsed_time,
t.text AS query_text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t;

-- Find expensive queries
SELECT TOP 10
qs.sql_handle,
qs.execution_count,
qs.total_worker_time / qs.execution_count AS avg_cpu_time,
qs.total_elapsed_time / qs.execution_count AS avg_elapsed_time,
t.text AS query_text
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_sql_text(qs.sql_handle) t
ORDER BY avg_cpu_time DESC;

-- Check index usage
SELECT
i.name AS index_name,
s.user_seeks,
s.user_scans,
s.user_lookups,
s.user_updates
FROM sys.dm_db_index_usage_stats s
JOIN sys.indexes i ON s.object_id = i.object_id AND s.index_id = i.index_id
WHERE s.database_id = DB_ID('ecommerce_db');

Performance Optimization:

-- Update statistics
UPDATE STATISTICS customers WITH FULLSCAN;
UPDATE STATISTICS orders WITH FULLSCAN;

-- Rebuild fragmented indexes
ALTER INDEX ALL ON customers REBUILD;
ALTER INDEX ALL ON orders REBUILD;

-- Check database file sizes and growth
SELECT
name,
size/128.0 AS current_size_mb,
CASE max_size
WHEN 0 THEN 'No growth allowed'
WHEN -1 THEN 'Unlimited growth'
ELSE CAST(max_size/128.0 AS VARCHAR(10)) + ' MB'
END AS max_size,
growth AS growth_increment
FROM sys.database_files;

SQL Server Troubleshooting

Problem 1: SQL Server Service Won't Start

Check Windows Event Log:

# Check SQL Server error log
Get-EventLog -LogName Application -Source MSSQLSERVER -Newest 50

# Check Windows Service status
Get-Service -Name MSSQLSERVER

Common Solutions:

-- Check if databases can be attached
SELECT name, state_desc FROM sys.databases;

-- Check system database consistency from the command line (sqlcmd)
sqlcmd -E -S localhost -Q "DBCC CHECKDB('master')"
sqlcmd -E -S localhost -Q "DBCC CHECKDB('msdb')"

Problem 2: High CPU Usage

Identify Resource-Intensive Queries:

-- Check current CPU usage
SELECT
r.session_id,
r.cpu_time,
r.reads,
r.writes,
t.text
FROM sys.dm_exec_requests r
CROSS APPLY sys.dm_exec_sql_text(r.sql_handle) t
WHERE r.session_id > 50
ORDER BY r.cpu_time DESC;

-- Check wait statistics
SELECT TOP 10
wait_type,
waiting_tasks_count,
wait_time_ms,
max_wait_time_ms,
signal_wait_time_ms
FROM sys.dm_os_wait_stats
ORDER BY wait_time_ms DESC;

Problem 3: Database Corruption

Check Database Integrity:

-- Check database consistency
DBCC CHECKDB('ecommerce_db') WITH NO_INFOMSGS;

-- Check specific table
DBCC CHECKTABLE('customers') WITH NO_INFOMSGS;

-- Last resort if no clean backup exists: repair with possible data loss (requires single-user mode)
ALTER DATABASE [ecommerce_db] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;
DBCC CHECKDB('ecommerce_db', REPAIR_ALLOW_DATA_LOSS);
ALTER DATABASE [ecommerce_db] SET MULTI_USER;

MongoDB Administration

Installation and Configuration

Ubuntu Installation:

# Import MongoDB public GPG key
curl -fsSL https://pgp.mongodb.com/server-6.0.asc | sudo gpg -o /usr/share/keyrings/mongodb-server-6.0.gpg --dearmor

# Add MongoDB repository
echo "deb [ arch=amd64,arm64 signed-by=/usr/share/keyrings/mongodb-server-6.0.gpg ] https://repo.mongodb.org/apt/ubuntu jammy/mongodb-org/6.0 multiverse" | sudo tee /etc/apt/sources.list.d/mongodb-org-6.0.list

# Install MongoDB
sudo apt update
sudo apt install -y mongodb-org

# Start and enable MongoDB service
sudo systemctl start mongod
sudo systemctl enable mongod

Basic Security Configuration:

// Connect to MongoDB shell
mongosh

// Switch to admin database
use admin

// Create administrative user
db.createUser({
user: "admin",
pwd: "SecurePassword123!",
roles: [ { role: "userAdminAnyDatabase", db: "admin" },
{ role: "readWriteAnyDatabase", db: "admin" } ]
});

// Enable authentication
exit

Enable Authentication (/etc/mongod.conf):

# MongoDB configuration file (YAML - indentation is significant)
storage:
  dbPath: /var/lib/mongodb
  journal:
    enabled: true

systemLog:
  destination: file
  logAppend: true
  path: /var/log/mongodb/mongod.log

net:
  port: 27017
  bindIp: 127.0.0.1  # Bind to localhost only

security:
  authorization: enabled  # Enable authentication

processManagement:
  fork: true
  pidFilePath: /var/run/mongodb/mongod.pid

Restart MongoDB:

sudo systemctl restart mongod

# Connect with authentication
mongosh -u admin -p --authenticationDatabase admin

MongoDB Database Operations

Create Database and Collections:

// Switch to application database
use ecommerce_db

// Create application user
db.createUser({
user: "app_user",
pwd: "AppPassword123!",
roles: [ { role: "readWrite", db: "ecommerce_db" } ]
});

// Create collections (similar to tables)
db.createCollection("customers");
db.createCollection("products");
db.createCollection("orders");

// Insert sample data
db.customers.insertOne({
_id: ObjectId(),
firstName: "Priya",
lastName: "Sharma",
email: "priya.sharma@email.com",
phone: "9876543210",
address: {
street: "123 MG Road",
city: "Mumbai",
state: "Maharashtra",
zipCode: "400001"
},
registrationDate: new Date()
});

MongoDB Backup and Recovery

Database Backup with mongodump:

# Backup specific database
mongodump --host localhost:27017 --db ecommerce_db --out /backup/mongodb/

# Backup with authentication
mongodump --host localhost:27017 --username admin --password --authenticationDatabase admin --db ecommerce_db --out /backup/mongodb/

# Backup all databases
mongodump --host localhost:27017 --username admin --password --authenticationDatabase admin --out /backup/mongodb/all/

# Compressed backup
mongodump --host localhost:27017 --db ecommerce_db --archive=/backup/mongodb/ecommerce_db.archive --gzip

Automated Backup Script:

#!/bin/bash
# /opt/mongodb_backup.sh

# Variables
BACKUP_DIR="/backup/mongodb"
DATE=$(date +%Y%m%d_%H%M%S)
MONGO_HOST="localhost:27017"
MONGO_USER="backup_user"
MONGO_PASS="BackupPass123!"
DATABASES="ecommerce_db crm_db analytics_db"

# Create backup directory
mkdir -p $BACKUP_DIR

# Backup each database
for db in $DATABASES; do
echo "Backing up database: $db"
mongodump --host $MONGO_HOST \
--username $MONGO_USER \
--password $MONGO_PASS \
--authenticationDatabase admin \
--db $db \
--archive=$BACKUP_DIR/${db}_${DATE}.archive \
--gzip
done

# Remove backups older than 30 days
find $BACKUP_DIR -name "*.archive" -mtime +30 -delete

echo "MongoDB backup completed: $(date)"

Database Restore:

# Restore specific database
mongorestore --host localhost:27017 --username admin --password --authenticationDatabase admin --db ecommerce_db_restored /backup/mongodb/ecommerce_db/

# Restore from compressed archive
mongorestore --host localhost:27017 --username admin --password --authenticationDatabase admin --archive=/backup/mongodb/ecommerce_db.archive --gzip

# Restore and drop existing collections
mongorestore --host localhost:27017 --username admin --password --authenticationDatabase admin --db ecommerce_db --drop /backup/mongodb/ecommerce_db/

MongoDB Performance Monitoring

Built-in Performance Tools:

// Database statistics
db.stats()

// Collection statistics
db.customers.stats()

// Current operations
db.currentOp()

// Server status
db.serverStatus()

// Check index usage
db.customers.aggregate([
{ $indexStats: {} }
])

// Find slow operations (level 1 profiles only operations slower than slowms)
db.setProfilingLevel(1, { slowms: 1000 }) // Profile queries > 1 second
db.system.profile.find().sort({ ts: -1 }).limit(10)

Performance Optimization:

// Create indexes for common queries
db.customers.createIndex({ "email": 1 }, { unique: true })
db.customers.createIndex({ "address.city": 1 })
db.orders.createIndex({ "customerId": 1, "orderDate": -1 })
db.products.createIndex({ "category": 1, "price": 1 })

// Compound index for complex queries
db.orders.createIndex({
"customerId": 1,
"status": 1,
"orderDate": -1
})

// Text index for search functionality
db.products.createIndex({
"name": "text",
"description": "text"
})

// Check index effectiveness
db.customers.find({ "address.city": "Mumbai" }).explain("executionStats")

MongoDB Troubleshooting

Problem 1: MongoDB Service Issues

Check Service Status:

# Service status
sudo systemctl status mongod

# Check MongoDB log
sudo tail -f /var/log/mongodb/mongod.log

# Check disk space
df -h /var/lib/mongodb

# Check MongoDB process
ps aux | grep mongod

Common Issues and Solutions:

# Fix permission issues
sudo chown -R mongodb:mongodb /var/lib/mongodb
sudo chown -R mongodb:mongodb /var/log/mongodb

# Fix configuration issues
mongod --config /etc/mongod.conf --fork

# Repair database
mongod --repair --dbpath /var/lib/mongodb

Problem 2: High Memory Usage

Monitor Memory Usage:

// Check memory usage
db.serverStatus().mem

// Check collection sizes
db.getCollectionNames().forEach(function(name) {
  print(name + ": " + db[name].stats().size + " bytes");
})

// Check index sizes
db.customers.stats().indexSizes

Optimize Memory Usage:

// Remove unused indexes
db.customers.dropIndex("unused_index_name")

// Compact collections to reclaim space
db.runCommand({ compact: "customers" })

// Enable compression
db.createCollection("new_collection", {
storageEngine: {
wiredTiger: {
configString: "block_compressor=zlib"
}
}
})

Problem 3: Query Performance Issues

Identify Slow Queries:

// Enable profiling for slow queries
db.setProfilingLevel(1, { slowms: 100 })

// Check profiling data
db.system.profile.find().limit(10).sort({ ts: -1 }).pretty()

// Analyze specific query performance
db.customers.find({ "address.city": "Mumbai" }).explain("executionStats")

Query Optimization:

// Bad query (full collection scan)
db.customers.find({ "address.city": "Mumbai", "registrationDate": { $gte: new Date("2024-01-01") } })

// Good query (with proper index)
db.customers.createIndex({ "address.city": 1, "registrationDate": 1 })
db.customers.find({ "address.city": "Mumbai", "registrationDate": { $gte: new Date("2024-01-01") } })

// Use projection to limit returned fields
db.customers.find(
{ "address.city": "Mumbai" },
{ "firstName": 1, "lastName": 1, "email": 1 }
)

Cross-Platform Database Optimization Strategies

General Performance Principles

1. Hardware Optimization

Storage Considerations:

  • SSDs vs HDDs: SSDs provide 10-100x faster random I/O
  • RAID Configuration: RAID 10 for databases (performance + redundancy)
  • Separate drives: OS, database files, logs, and temp files on different drives
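
Before changing hardware, it is worth confirming that storage is actually the bottleneck; iostat from the sysstat package gives a quick view of per-device latency (the 5-second interval is arbitrary):

# Extended per-device statistics every 5 seconds; high await values indicate storage latency problems
iostat -dx 5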

Memory Allocation:

# General rule: allocate 60-80% of available RAM to the database on a dedicated server
# For a 16GB server:
# - MySQL: innodb_buffer_pool_size = 12G
# - PostgreSQL: shared_buffers = 4G (about 25%; PostgreSQL relies on the OS cache for the rest), work_mem = 64MB
# - SQL Server: max server memory = 12288 MB
# - MongoDB: WiredTiger cache (default is roughly 50% of RAM minus 1GB) plus the OS filesystem cache

CPU Considerations:

  • For concurrent workloads, more cores generally matter more than a higher clock speed
  • Hyperthreading usually benefits OLTP workloads with many small queries
  • For analytics/reporting workloads, benchmark with hyperthreading disabled - the benefit varies by platform and workload

2. Index Strategy Across Platforms

Common Index Types:

-- B-Tree indexes (most common, all platforms)
-- MySQL
CREATE INDEX idx_customer_email ON customers(email);

-- PostgreSQL
CREATE INDEX CONCURRENTLY idx_customer_email ON customers(email);

-- SQL Server
CREATE INDEX idx_customer_email ON customers(email);
// MongoDB
db.customers.createIndex({ "email": 1 })

Composite Index Guidelines:

-- Order matters: most selective columns first
CREATE INDEX idx_orders_status_date ON orders(status, order_date);

-- For queries like: WHERE status = 'pending' AND order_date >= '2024-01-01'

3. Query Optimization Best Practices

Avoid SELECT *:

-- Bad
SELECT * FROM customers WHERE city = 'Mumbai';

-- Good
SELECT customer_id, first_name, last_name, email FROM customers WHERE city = 'Mumbai';

Use LIMIT/TOP:

-- MySQL/PostgreSQL
SELECT * FROM orders ORDER BY order_date DESC LIMIT 50;

-- SQL Server
SELECT TOP 50 * FROM orders ORDER BY order_date DESC;
// MongoDB
db.orders.find().sort({ orderDate: -1 }).limit(50)

Proper WHERE Clause Usage:

-- Use indexed columns in WHERE clauses
-- Avoid functions in WHERE clauses

-- Bad
SELECT * FROM orders WHERE YEAR(order_date) = 2024;

-- Good
SELECT * FROM orders WHERE order_date >= '2024-01-01' AND order_date < '2025-01-01';

Monitoring and Alerting Setup

Key Metrics to Monitor:

Performance Metrics:

  • Query response time (target: <100ms for simple queries)
  • Transactions per second (TPS)
  • Connection pool usage
  • Cache hit ratios
  • Index usage statistics

Resource Metrics:

  • CPU utilization (target: <80% average)
  • Memory usage
  • Disk I/O (IOPS, latency, throughput)
  • Network throughput

Availability Metrics:

  • Service uptime
  • Failed connection attempts
  • Backup success/failure
  • Replication lag (if applicable)
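
As an example of turning one of these metrics into a concrete check, the PostgreSQL buffer cache hit ratio can be derived from pg_stat_database; the ~99% figure in the comment is a common rule of thumb, not a hard requirement:

-- Approximate buffer cache hit ratio per database (OLTP systems typically aim for ~99%)
SELECT datname,
       round(100.0 * blks_hit / NULLIF(blks_hit + blks_read, 0), 2) AS cache_hit_pct
FROM pg_stat_database
WHERE blks_hit + blks_read > 0
ORDER BY cache_hit_pct;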

Platform-Specific Monitoring Commands:

MySQL:

-- Performance monitoring
SHOW PROCESSLIST;
SHOW ENGINE INNODB STATUS;
SELECT * FROM performance_schema.events_statements_summary_by_digest ORDER BY SUM_TIMER_WAIT DESC LIMIT 10;

PostgreSQL:

-- Activity monitoring
SELECT * FROM pg_stat_activity;
SELECT * FROM pg_stat_database;
SELECT * FROM pg_stat_user_tables;

SQL Server:

-- Performance monitoring
SELECT * FROM sys.dm_exec_requests;
SELECT * FROM sys.dm_os_wait_stats;
SELECT * FROM sys.dm_db_index_usage_stats;

MongoDB:

// Performance monitoring
db.serverStatus()
db.currentOp()
db.stats()

Automated Monitoring Script

Universal Database Health Check Script:

#!/bin/bash
# /opt/db_health_check.sh

# Variables
LOG_FILE="/var/log/db_health_check.log"
DATE=$(date '+%Y-%m-%d %H:%M:%S')
ALERT_EMAIL="admin@company.com"

echo "[$DATE] Starting database health check" >> $LOG_FILE

# MySQL Health Check
if systemctl is-active --quiet mysql; then
echo "[$DATE] MySQL: Service is running" >> $LOG_FILE

# Check connections
MYSQL_CONNECTIONS=$(mysql -u monitor -p'MonitorPass123!' -e "SHOW STATUS LIKE 'Threads_connected';" | awk 'NR==2{print $2}')
if [ "$MYSQL_CONNECTIONS" -gt 100 ]; then
echo "[$DATE] MySQL: WARNING - High connection count: $MYSQL_CONNECTIONS" >> $LOG_FILE
echo "High MySQL connections: $MYSQL_CONNECTIONS" | mail -s "Database Alert" $ALERT_EMAIL
fi

# Check slow queries
SLOW_QUERIES=$(mysql -u monitor -p'MonitorPass123!' -e "SHOW STATUS LIKE 'Slow_queries';" | awk 'NR==2{print $2}')
echo "[$DATE] MySQL: Slow queries: $SLOW_QUERIES" >> $LOG_FILE
else
echo "[$DATE] MySQL: Service is NOT running!" >> $LOG_FILE
echo "MySQL service is down" | mail -s "CRITICAL: Database Alert" $ALERT_EMAIL
fi

# PostgreSQL Health Check
if systemctl is-active --quiet postgresql; then
echo "[$DATE] PostgreSQL: Service is running" >> $LOG_FILE

# Check connections
PG_CONNECTIONS=$(sudo -u postgres psql -t -c "SELECT count(*) FROM pg_stat_activity;")
echo "[$DATE] PostgreSQL: Active connections: $PG_CONNECTIONS" >> $LOG_FILE

# Check database sizes
sudo -u postgres psql -c "SELECT datname, pg_size_pretty(pg_database_size(datname)) FROM pg_database;" >> $LOG_FILE
else
echo "[$DATE] PostgreSQL: Service is NOT running!" >> $LOG_FILE
fi

# MongoDB Health Check
if systemctl is-active --quiet mongod; then
echo "[$DATE] MongoDB: Service is running" >> $LOG_FILE

# Check connections and memory
mongosh --quiet --eval "
var status = db.serverStatus();
print('Connections: ' + status.connections.current + '/' + status.connections.available);
print('Memory: ' + (status.mem.resident) + 'MB resident, ' + (status.mem.virtual) + 'MB virtual');
" >> $LOG_FILE
else
echo "[$DATE] MongoDB: Service is NOT running!" >> $LOG_FILE
fi

# Disk space check
DISK_USAGE=$(df -h /var/lib | awk 'NR==2{print $5}' | sed 's/%//')
if [ "$DISK_USAGE" -gt 85 ]; then
echo "[$DATE] WARNING: High disk usage: ${DISK_USAGE}%" >> $LOG_FILE
echo "High disk usage: ${DISK_USAGE}%" | mail -s "Database Storage Alert" $ALERT_EMAIL
fi

echo "[$DATE] Database health check completed" >> $LOG_FILE

Schedule the monitoring script:

# Add to crontab for every 5 minutes
crontab -e

# Add this line:
*/5 * * * * /opt/db_health_check.sh

Database Security Hardening

Universal Security Checklist:

1. Network Security:

# Firewall configuration - open only the port(s) for databases actually running on this host
sudo ufw allow 3306/tcp # MySQL
sudo ufw allow 5432/tcp # PostgreSQL
sudo ufw allow 1433/tcp # SQL Server
sudo ufw allow 27017/tcp # MongoDB

# Bind to specific IP addresses, not 0.0.0.0
# MySQL: bind-address = 192.168.1.100
# PostgreSQL: listen_addresses = '192.168.1.100'
# SQL Server: TCP/IP properties
# MongoDB: bindIp: 192.168.1.100

2. Authentication and Authorization:

-- Remove default/anonymous accounts
-- MySQL
DELETE FROM mysql.user WHERE User='';
DELETE FROM mysql.user WHERE User='root' AND Host NOT IN ('localhost', '127.0.0.1', '::1');
FLUSH PRIVILEGES;

-- Create application-specific users with minimal privileges
GRANT SELECT, INSERT, UPDATE, DELETE ON ecommerce_db.* TO 'app_user'@'192.168.1.%';

3. Encryption:

# Enable SSL/TLS for connections
# MySQL: --ssl-cert, --ssl-key, --ssl-ca
# PostgreSQL: ssl = on in postgresql.conf
# SQL Server: Force Encryption = Yes
# MongoDB: ssl mode in configuration
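
As a hedged example for one platform, enabling TLS in PostgreSQL is a matter of pointing postgresql.conf at a certificate and key; the file paths below are placeholders:

# postgresql.conf - enable TLS for client connections
ssl = on
ssl_cert_file = '/etc/ssl/certs/server.crt'
ssl_key_file = '/etc/ssl/private/server.key'

Clients can then be required to use TLS by switching the relevant pg_hba.conf entries from host to hostssl.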

4. Audit Logging:

-- Enable audit logging
-- MySQL: audit_log plugin
-- PostgreSQL: log_statement = 'all'
-- SQL Server: SQL Server Audit
-- MongoDB: auditLog.destination

Disaster Recovery Planning

Recovery Time Objective (RTO) and Recovery Point Objective (RPO):

Business Classifications:

  • Mission Critical: RTO < 1 hour, RPO < 15 minutes
  • Business Critical: RTO < 4 hours, RPO < 1 hour
  • Important: RTO < 24 hours, RPO < 4 hours
  • Non-Critical: RTO < 72 hours, RPO < 24 hours
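
These targets translate directly into backup frequency. As a rough sketch for a Business Critical MySQL database (RPO < 1 hour), reusing the backup script and binary-log settings shown earlier in this section:

# Nightly full logical backup plus a binary-log flush/copy every 30 minutes
0 2 * * * /opt/mysql_backup.sh >> /var/log/mysql_backup.log 2>&1
*/30 * * * * mysqladmin -u backup_user -p'BackupPass123!' flush-logs && rsync -a /var/lib/mysql/mysql-bin.* /backup/mysql/binlogs/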

Multi-Platform Backup Strategy:

#!/bin/bash
# Comprehensive backup script for mixed environment

# MySQL backup
mysqldump --all-databases --single-transaction | gzip > /backup/mysql_$(date +%Y%m%d).sql.gz

# PostgreSQL backup
pg_dumpall | gzip > /backup/postgresql_$(date +%Y%m%d).sql.gz

# SQL Server backup (via sqlcmd)
sqlcmd -E -Q "BACKUP DATABASE [ecommerce_db] TO DISK = N'/backup/sqlserver_$(date +%Y%m%d).bak'"

# MongoDB backup
mongodump --archive=/backup/mongodb_$(date +%Y%m%d).archive --gzip

# Sync to offsite location
rsync -az /backup/ user@backup-server:/remote/backup/

Testing Recovery Procedures:

#!/bin/bash
# Quarterly DR test script

# Test MySQL restore
mysql < /backup/mysql_test.sql

# Test PostgreSQL restore
dropdb test_restore_db
createdb test_restore_db
gunzip < /backup/postgresql_test.sql.gz | psql test_restore_db

# Test SQL Server restore
sqlcmd -E -Q "RESTORE DATABASE [test_db] FROM DISK = N'/backup/sqlserver_test.bak'"

# Test MongoDB restore
mongorestore --archive=/backup/mongodb_test.archive --gzip --db test_restore_db

echo "DR test completed: $(date)" >> /var/log/dr_test.log

Key Takeaways

  • Each database platform has unique installation, configuration, and management procedures
  • MySQL excels in web applications and is cost-effective for small to medium businesses
  • PostgreSQL provides advanced features and reliability for complex enterprise applications
  • SQL Server integrates well with Microsoft environments and offers comprehensive enterprise features
  • MongoDB handles flexible, document-based data structures ideal for modern web applications
  • Performance optimization requires platform-specific approaches but follows common principles
  • Regular monitoring, backup testing, and security hardening are essential across all platforms
  • Automated monitoring and alerting help identify issues before they impact business operations
  • Disaster recovery planning must account for business requirements and technical capabilities
  • Understanding multiple database platforms increases career opportunities and client service capabilities

What's Next?

In the final section of this module, we'll cover database integration with applications, API development, and how databases fit into modern cloud architectures. You'll learn how to connect applications to databases securely and efficiently, and understand the role of databases in DevOps and cloud-native environments.