
Assessment

Assessment Overview

Duration: 90 minutes
Total Points: 100
Passing Score: 70%
Format: Mixed (Multiple Choice, Practical Scenarios, Hands-on Tasks)

This assessment evaluates your ability to manage databases in real MSP environments, from basic administration to modern cloud integration.


Part A: Fundamentals and Concepts (25 points)

Question 1 (5 points)

Multiple Choice: Your client, a manufacturing company in Chennai, is experiencing slow application performance. They currently store all data in Excel files on a shared network drive. Which database benefit would most directly address their primary concern?

A) Better data security and access control
B) Improved query performance and concurrent access
C) Automated backup and disaster recovery
D) Compliance with Indian data protection laws

Correct Answer: B
Explanation: While all options are benefits, the primary concern (slow performance with Excel files) is directly addressed by database query optimization and concurrent access capabilities.

Question 2 (5 points)

Multiple Choice: You're designing a database for a Delhi-based e-commerce client who needs to store product catalogs with varying attributes (electronics have different specs than clothing). Which database type would be most appropriate?

A) Relational database (MySQL) - best for structured, consistent data
B) NoSQL document database (MongoDB) - flexible schema for varying attributes
C) Graph database (Neo4j) - optimized for relationship mapping
D) In-memory database (Redis) - fastest query performance

Correct Answer: B
Explanation: NoSQL document databases like MongoDB excel at storing semi-structured data with varying schemas, perfect for product catalogs with different attribute sets.

Question 3 (5 points)

Scenario: A Mumbai law firm wants to move their client database to the cloud but is concerned about data sovereignty under Indian laws. What combination of factors should you recommend?

A) Any global cloud provider with standard encryption
B) Indian cloud provider with data residency guarantees
C) Hybrid cloud with sensitive data on-premises
D) Global cloud provider with Indian data centers and DPDP Act compliance

Correct Answer: D
Explanation: A global provider with Indian data centers offers the best of both worlds: compliance with local data-residency requirements (now governed by the DPDP Act, 2023) along with enterprise-grade services.

Question 4 (10 points)

Short Answer: Explain the difference between ACID properties and BASE principles in database systems. Give a real business scenario where you'd recommend each approach.

Sample Answer: ACID (Atomicity, Consistency, Isolation, Durability):

  • Ensures strict data consistency and reliability
  • All operations complete successfully or none do
  • Best for financial transactions, inventory management
  • Example: Banking system where account balances must always be accurate

BASE (Basically Available, Soft state, Eventual consistency):

  • Prioritizes availability over immediate consistency
  • Data may be temporarily inconsistent but eventually becomes consistent
  • Best for high-scale social media, content delivery
  • Example: Facebook posts - it's okay if likes/comments appear slightly delayed across servers

Business Recommendation:

  • ACID for ERP, financial systems, inventory management
  • BASE for social media features, analytics, recommendation engines
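
To make the ACID side of the sample answer concrete, here is a minimal transaction sketch; the accounts table and account IDs are illustrative, not part of the scenario:

-- Minimal ACID sketch: a funds transfer either fully succeeds or fully rolls back
START TRANSACTION;

UPDATE accounts SET balance = balance - 5000 WHERE account_id = 101;
UPDATE accounts SET balance = balance + 5000 WHERE account_id = 202;

-- If either UPDATE fails (or the connection drops), the engine rolls back both changes,
-- so the combined balance is never left in an inconsistent state
COMMIT;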

Part B: SQL and Query Optimization (30 points)

Question 5 (15 points)

Practical Scenario: You're managing a database for "TechMart Bangalore," an electronics retailer. Given the following table structures, write SQL queries for the business requirements below:

-- Tables
CREATE TABLE customers (
    customer_id INT PRIMARY KEY,
    name VARCHAR(100),
    email VARCHAR(100),
    city VARCHAR(50),
    registration_date DATE
);

CREATE TABLE products (
    product_id INT PRIMARY KEY,
    name VARCHAR(100),
    category VARCHAR(50),
    price DECIMAL(10,2),
    stock_quantity INT
);

CREATE TABLE orders (
    order_id INT PRIMARY KEY,
    customer_id INT,
    order_date DATE,
    total_amount DECIMAL(10,2),
    FOREIGN KEY (customer_id) REFERENCES customers(customer_id)
);

CREATE TABLE order_items (
    order_item_id INT PRIMARY KEY,
    order_id INT,
    product_id INT,
    quantity INT,
    unit_price DECIMAL(10,2),
    FOREIGN KEY (order_id) REFERENCES orders(order_id),
    FOREIGN KEY (product_id) REFERENCES products(product_id)
);

Write queries for:

a) Find the top 5 customers by total purchase amount in 2024:

SELECT
    c.customer_id,
    c.name,
    c.email,
    SUM(o.total_amount) AS total_purchases
FROM customers c
JOIN orders o ON c.customer_id = o.customer_id
WHERE YEAR(o.order_date) = 2024
GROUP BY c.customer_id, c.name, c.email
ORDER BY total_purchases DESC
LIMIT 5;

b) Identify products that are low in stock (less than 10 units) and have been ordered in the last 30 days:

SELECT DISTINCT
    p.product_id,
    p.name,
    p.stock_quantity,
    p.category
FROM products p
JOIN order_items oi ON p.product_id = oi.product_id
JOIN orders o ON oi.order_id = o.order_id
WHERE p.stock_quantity < 10
  AND o.order_date >= DATE_SUB(CURDATE(), INTERVAL 30 DAY)
ORDER BY p.stock_quantity ASC;

c) Calculate monthly revenue by product category for the current year:

SELECT
    p.category,
    MONTH(o.order_date) AS month,
    MONTHNAME(o.order_date) AS month_name,
    SUM(oi.quantity * oi.unit_price) AS category_revenue
FROM products p
JOIN order_items oi ON p.product_id = oi.product_id
JOIN orders o ON oi.order_id = o.order_id
WHERE YEAR(o.order_date) = YEAR(CURDATE())
GROUP BY p.category, MONTH(o.order_date), MONTHNAME(o.order_date)
ORDER BY p.category, month;

Question 6 (15 points)

Performance Optimization: The query below is running slowly on a table with 1 million orders. Identify performance issues and provide optimized solutions:

-- Slow query
SELECT c.name, o.order_date, o.total_amount
FROM customers c, orders o
WHERE c.customer_id = o.customer_id
AND o.order_date LIKE '2024%'
AND o.total_amount > 5000
ORDER BY o.order_date DESC;

Issues Identified:

  1. Old-style JOIN syntax - harder to read and maintain
  2. LIKE with wildcard for dates - cannot use date indexes efficiently
  3. Missing indexes - likely no index on order_date or total_amount
  4. Inefficient date comparison - string comparison instead of date comparison

Optimized Query:

-- Optimized version
SELECT c.name, o.order_date, o.total_amount
FROM customers c
INNER JOIN orders o ON c.customer_id = o.customer_id
WHERE o.order_date >= '2024-01-01'
AND o.order_date < '2025-01-01'
AND o.total_amount > 5000
ORDER BY o.order_date DESC;

Required Indexes:

-- Composite index for optimal performance
CREATE INDEX idx_orders_date_amount ON orders (order_date, total_amount);

-- Alternative single indexes if composite isn't suitable
CREATE INDEX idx_orders_date ON orders (order_date);
CREATE INDEX idx_orders_amount ON orders (total_amount);

Performance Improvements Expected:

  • Typically 50-90% faster execution, depending on data distribution and index selectivity
  • Better use of date indexes
  • More efficient JOIN processing
  • Clearer execution plan
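
To confirm the optimizer actually uses the new composite index, inspect the execution plan; in MySQL a quick check looks like this (the key name should match the index created above):

-- Check that idx_orders_date_amount is chosen for the optimized query
EXPLAIN
SELECT c.name, o.order_date, o.total_amount
FROM customers c
INNER JOIN orders o ON c.customer_id = o.customer_id
WHERE o.order_date >= '2024-01-01'
  AND o.order_date < '2025-01-01'
  AND o.total_amount > 5000
ORDER BY o.order_date DESC;
-- Look for key = idx_orders_date_amount and type = range in the orders row of the plan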

Part C: Database Administration (25 points)

Question 7 (15 points)

Scenario: You're the MSP administrator for "Global Trading Co." They use SQL Server 2019 and experienced a critical failure yesterday. The transaction log is full, and users cannot connect. Walk through your troubleshooting and recovery process:

Step-by-step Solution:

1. Immediate Assessment (2-3 minutes):

-- Check database status
SELECT name, state_desc FROM sys.databases;

-- Check log file space (FILEPROPERTY only resolves files in the current database, so run this from the affected database)
SELECT
    name,
    physical_name,
    size * 8.0 / 1024 AS size_mb,
    FILEPROPERTY(name, 'SpaceUsed') * 8.0 / 1024 AS used_mb
FROM sys.master_files
WHERE database_id = DB_ID('GlobalTrading');

2. Emergency Log Space Creation (5 minutes):

-- Add temporary log file
ALTER DATABASE [GlobalTrading]
ADD LOG FILE (
    NAME = 'GlobalTrading_Log_Temp',
    FILENAME = 'C:\TempLogs\GlobalTrading_temp.ldf',
    SIZE = 1GB
);

3. Backup and Truncate Log (10 minutes):

-- Back up the transaction log to free internal log space
BACKUP LOG [GlobalTrading]
TO DISK = 'C:\Backups\GlobalTrading_Emergency_Log.trn'
WITH FORMAT, INIT;

-- BACKUP LOG ... WITH TRUNCATE_ONLY; -- only available in SQL Server 2005 and earlier
-- On SQL Server 2008 and later, temporarily switching to the SIMPLE recovery model truncates the log,
-- but this breaks the log chain, so take a full backup immediately after switching back to FULL
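
Once the log backup has freed internal space, the oversized physical log file can be shrunk before it is resized in step 4; a sketch, assuming the logical log file is named GlobalTrading_Log:

-- Shrink the physical log file down to roughly 2 GB (target size is in MB)
DBCC SHRINKFILE ('GlobalTrading_Log', 2048);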

4. Restore Normal Operations (5 minutes):

-- Resize original log file
ALTER DATABASE [GlobalTrading]
MODIFY FILE (
    NAME = 'GlobalTrading_Log',
    SIZE = 2GB
);

-- Remove temporary log file
ALTER DATABASE [GlobalTrading]
REMOVE FILE GlobalTrading_Log_Temp;

5. Preventive Measures:

  • Set up automated log backups every 15 minutes
  • Configure log file autogrowth settings
  • Implement monitoring for log space usage
  • Create maintenance plan for regular log management
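
A sketch of the autogrowth and monitoring pieces, assuming the same GlobalTrading file names used above (sizes should be tuned to the client's workload):

-- Predictable autogrowth instead of the default percentage growth
ALTER DATABASE [GlobalTrading]
MODIFY FILE (
    NAME = 'GlobalTrading_Log',
    FILEGROWTH = 512MB,
    MAXSIZE = 50GB
);

-- Quick log-space check for monitoring (SQL Server 2012 and later)
SELECT
    total_log_size_in_bytes / 1048576.0 AS log_size_mb,
    used_log_space_in_bytes / 1048576.0 AS used_mb,
    used_log_space_in_percent
FROM sys.dm_db_log_space_usage;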

Question 8 (10 points)

Multi-Platform Question: Compare the backup strategies for MySQL and PostgreSQL. Your client runs both systems and wants consistent backup procedures.

MySQL Backup Strategy:

#!/bin/bash
# mysql_backup.sh

DB_USER="backup_user"
DB_PASS="secure_password"
BACKUP_DIR="/backup/mysql"
DATE=$(date +%Y%m%d_%H%M%S)

mkdir -p "$BACKUP_DIR/binlogs"

# Full backup with mysqldump
# (--master-data is deprecated in MySQL 8.0.26+; use --source-data=2 there)
mysqldump --user=$DB_USER --password=$DB_PASS \
  --single-transaction \
  --routines \
  --triggers \
  --all-databases \
  --master-data=2 > "$BACKUP_DIR/full_backup_$DATE.sql"

# Binary log backup for point-in-time recovery
mysql --user=$DB_USER --password=$DB_PASS -e "FLUSH LOGS"
cp /var/lib/mysql/mysql-bin.* "$BACKUP_DIR/binlogs/"

# Compress backup
gzip "$BACKUP_DIR/full_backup_$DATE.sql"
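
A quick way to confirm that point-in-time recovery is actually possible is to check from the MySQL client that binary logging is enabled and which log file is current (SHOW MASTER STATUS still works in MySQL 8.0, though newer releases rename it):

-- Confirm binary logging is enabled and see the current binlog file and position
SHOW VARIABLES LIKE 'log_bin';
SHOW MASTER STATUS;
SHOW BINARY LOGS;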

PostgreSQL Backup Strategy:

#!/bin/bash
# postgresql_backup.sh

DB_USER="postgres"
BACKUP_DIR="/backup/postgresql"
DATE=$(date +%Y%m%d_%H%M%S)

# Full cluster backup with pg_dumpall (roles, tablespaces, all databases)
pg_dumpall -U $DB_USER > "$BACKUP_DIR/full_backup_$DATE.sql"

# Individual database backup in custom (compressed) format
pg_dump -U $DB_USER -d production_db \
  --format=custom \
  --compress=9 > "$BACKUP_DIR/production_db_$DATE.backup"

# WAL archiving for point-in-time recovery
# (requires postgresql.conf configuration)
# archive_mode = on
# archive_command = 'cp %p /backup/postgresql/wal_archive/%f'
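
After enabling archive_mode, it is worth verifying that WAL archiving works; a minimal check from psql (pg_switch_wal() requires PostgreSQL 10 or later):

-- Force a WAL segment switch so archive_command runs immediately
SELECT pg_switch_wal();

-- Confirm the archiver is keeping up
SELECT archived_count, failed_count, last_archived_wal, last_archived_time
FROM pg_stat_archiver;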

Unified Monitoring Script:

#!/bin/bash
# unified_backup_monitor.sh

check_mysql_backup() {
    LATEST_MYSQL=$(ls -t /backup/mysql/full_backup_*.sql.gz 2>/dev/null | head -1)
    if [ -z "$LATEST_MYSQL" ] || [ $(stat -c %Y "$LATEST_MYSQL") -lt $(date -d '25 hours ago' +%s) ]; then
        echo "ALERT: MySQL backup is older than 24 hours"
        return 1
    fi
    return 0
}

check_postgresql_backup() {
    LATEST_PG=$(ls -t /backup/postgresql/full_backup_*.sql 2>/dev/null | head -1)
    if [ -z "$LATEST_PG" ] || [ $(stat -c %Y "$LATEST_PG") -lt $(date -d '25 hours ago' +%s) ]; then
        echo "ALERT: PostgreSQL backup is older than 24 hours"
        return 1
    fi
    return 0
}

# Run checks and send alerts if needed (SLACK_WEBHOOK_URL must be set in the environment)
if ! check_mysql_backup || ! check_postgresql_backup; then
    # Send alert to monitoring system
    curl -X POST -H 'Content-type: application/json' \
        --data '{"text":"Database backup failure detected"}' \
        "$SLACK_WEBHOOK_URL"
fi

Part D: Cloud and Modern Architectures (20 points)

Question 9 (10 points)

Cloud Migration Scenario: "Retail Solutions Pvt Ltd" in Mumbai wants to migrate their on-premises SQL Server to Azure. They have a 200GB database with 100 concurrent users. Design a migration strategy including cost analysis.

Migration Strategy:

1. Assessment Phase (Week 1):

# Use the Microsoft Data Migration Assistant (DMA)
# Download and install DMA on the source server
# Run the assessment report from the command line
DmaCmd.exe /AssessmentName="RetailSolutions_Assessment"
  /AssessmentDatabases="ProductionDB"
  /AssessmentTargetPlatform="SqlServerOnAzureSqlDatabase"
  /AssessmentOutputFolder="C:\Assessment"

2. Resource Planning:

# Azure SQL Database configuration
Service Tier: General Purpose
Compute Tier: Provisioned
vCores: 8 (for 100 concurrent users)
Storage: 250 GB (with growth buffer)
Backup: 7-day retention with geo-redundancy

3. Cost Analysis (Monthly in INR):

Azure SQL Database (GP 8 vCore):    ₹45,000
Additional Storage (50 GB):         ₹750
Backup Storage (geo-redundant):     ₹1,200
Data Transfer:                      ₹500
Total Monthly Cost:                 ₹47,450
Annual Cost:                        ₹5,69,400

On-premises Annual Costs:
Hardware depreciation:              ₹2,00,000
SQL Server licensing:               ₹4,50,000
Maintenance and support:            ₹1,50,000
Power and cooling:                  ₹80,000
IT staff (25% effort):              ₹3,00,000
Total Annual Cost:                  ₹11,80,000

Annual Savings: ₹6,10,600 (52% reduction)

4. Migration Execution:

-- Week 2-3: Schema and data migration
-- Use Azure Database Migration Service
-- Or traditional backup/restore method

-- Create Azure SQL Database
-- Restore from .bacpac file or use DMS
-- Update connection strings in applications
-- Perform thorough testing

-- Cutover plan:
-- 1. Final incremental sync
-- 2. Update DNS/connection strings
-- 3. Monitor performance
-- 4. Rollback plan ready
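
A simple post-migration sanity check is to compare object and row counts between the source server and the new Azure SQL database; this T-SQL sketch runs unchanged on both sides (table names come from whatever schema was migrated):

-- Row counts per table, for a source-vs-target comparison after cutover
SELECT
    t.name AS table_name,
    SUM(p.rows) AS row_count
FROM sys.tables t
JOIN sys.partitions p
    ON p.object_id = t.object_id AND p.index_id IN (0, 1)
GROUP BY t.name
ORDER BY t.name;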

Question 10 (10 points)

DevOps Integration: Design a CI/CD pipeline for database deployments that supports both development and production environments with proper approval processes.

Pipeline Design:

# azure-pipelines-database.yml
trigger:
  branches:
    include:
      - main
      - develop
  paths:
    include:
      - database/migrations/*
      - database/scripts/*

variables:
  - group: database-secrets-dev
  - group: database-secrets-prod

stages:
  - stage: Build
    displayName: 'Build and Validate'
    jobs:
      - job: DatabaseBuild
        displayName: 'Build Database Project'
        steps:
          - task: MSBuild@1
            inputs:
              solution: 'database/DatabaseProject.sln'
              configuration: 'Release'

          - task: PublishBuildArtifacts@1
            inputs:
              PathtoPublish: 'database/bin/Release'
              ArtifactName: 'database-artifacts'

  - stage: DeployDev
    displayName: 'Deploy to Development'
    dependsOn: Build
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/develop'))
    jobs:
      - job: DevDeployment
        displayName: 'Development Deployment'
        steps:
          - task: SqlAzureDacpacDeployment@1
            inputs:
              azureSubscription: 'Dev-Subscription'
              ServerName: '$(dev-sql-server)'
              DatabaseName: '$(dev-database-name)'
              SqlUsername: '$(dev-sql-username)'
              SqlPassword: '$(dev-sql-password)'
              DacpacFile: '**/*.dacpac'
              AdditionalArguments: '/p:BlockOnPossibleDataLoss=false'

          - task: PowerShell@2
            displayName: 'Run Integration Tests'
            inputs:
              targetType: 'inline'
              script: |
                # Run database integration tests
                Invoke-Sqlcmd -ServerInstance "$(dev-sql-server)" -Database "$(dev-database-name)" -InputFile "tests/integration-tests.sql"

  - stage: ApprovalGate
    displayName: 'Production Approval'
    dependsOn: DeployDev
    condition: and(succeeded(), eq(variables['Build.SourceBranch'], 'refs/heads/main'))
    jobs:
      - job: waitForValidation
        displayName: 'Wait for Manual Approval'
        pool: server
        timeoutInMinutes: 4320 # 3 days
        steps:
          - task: ManualValidation@0
            inputs:
              notifyUsers: 'dba@company.com,lead-dev@company.com'
              instructions: 'Please review the database changes and approve for production deployment'

  - stage: DeployProd
    displayName: 'Deploy to Production'
    dependsOn: ApprovalGate
    condition: succeeded()
    jobs:
      - job: ProdDeployment
        displayName: 'Production Deployment'
        steps:
          - task: PowerShell@2
            displayName: 'Pre-Deployment Backup'
            inputs:
              targetType: 'inline'
              script: |
                # Create a backup before deployment
                $backupName = "PreDeployment_$(Get-Date -Format 'yyyyMMdd_HHmmss').bak"
                Invoke-Sqlcmd -ServerInstance "$(prod-sql-server)" -Query "BACKUP DATABASE [$(prod-database-name)] TO DISK = 'C:\Backups\$backupName'"

          - task: SqlAzureDacpacDeployment@1
            inputs:
              azureSubscription: 'Prod-Subscription'
              ServerName: '$(prod-sql-server)'
              DatabaseName: '$(prod-database-name)'
              SqlUsername: '$(prod-sql-username)'
              SqlPassword: '$(prod-sql-password)'
              DacpacFile: '**/*.dacpac'
              AdditionalArguments: '/p:BlockOnPossibleDataLoss=true'

          - task: PowerShell@2
            displayName: 'Post-Deployment Verification'
            inputs:
              targetType: 'inline'
              script: |
                # Verify deployment success
                $result = Invoke-Sqlcmd -ServerInstance "$(prod-sql-server)" -Database "$(prod-database-name)" -Query "SELECT COUNT(*) as TableCount FROM INFORMATION_SCHEMA.TABLES"
                Write-Host "Total tables: $($result.TableCount)"

                # Send deployment notification
                $webhook = "$(slack-webhook-url)"
                $payload = @{
                    text = "✅ Database deployment completed successfully for $(prod-database-name)"
                } | ConvertTo-Json

                Invoke-RestMethod -Uri $webhook -Method Post -Body $payload -ContentType 'application/json'

Key Pipeline Features:

  • Automated testing in development environment
  • Manual approval required for production
  • Automatic backups before production deployment
  • Rollback capability with backup restoration
  • Notification system for deployment status
  • Environment-specific configurations with variable groups
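
If a production deployment has to be rolled back, the pre-deployment backup taken by the pipeline can be restored; a sketch for a self-hosted SQL Server target (file and database names are illustrative, and on Azure SQL Database you would use point-in-time restore instead, since RESTORE FROM DISK is not supported there):

-- Kick out active sessions, restore the pre-deployment backup, then reopen the database
ALTER DATABASE [ProductionDB] SET SINGLE_USER WITH ROLLBACK IMMEDIATE;

RESTORE DATABASE [ProductionDB]
FROM DISK = 'C:\Backups\PreDeployment_20250101_090000.bak'
WITH REPLACE, RECOVERY;

ALTER DATABASE [ProductionDB] SET MULTI_USER;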

Assessment Scoring Rubric

Scoring Breakdown:

  • Part A (25 points): Fundamental concepts and database selection
  • Part B (30 points): SQL skills and query optimization
  • Part C (25 points): Database administration and troubleshooting
  • Part D (20 points): Cloud migration and modern practices

Grading Criteria:

Excellent (90-100 points):

  • Demonstrates mastery of database concepts and practical application
  • Provides comprehensive solutions with business context
  • Shows understanding of modern architectures and best practices
  • Includes cost-benefit analysis and ROI considerations

Proficient (70-89 points):

  • Shows solid understanding of database fundamentals
  • Can write effective SQL queries and basic administration tasks
  • Understands cloud migration concepts
  • Provides workable solutions with minor gaps

Developing (50-69 points):

  • Basic understanding of database concepts
  • Can perform simple queries and administration tasks
  • Limited understanding of advanced topics
  • Solutions work but lack optimization or best practices

Needs Improvement (Below 50 points):

  • Fundamental gaps in database knowledge
  • Cannot complete basic SQL tasks effectively
  • Limited understanding of practical applications
  • Solutions are incomplete or incorrect

Practical Evaluation:

Candidates scoring 70% or higher demonstrate readiness to:

  • Manage database systems in MSP environments
  • Perform cloud database migrations
  • Implement basic DevOps practices for databases
  • Troubleshoot common database issues
  • Provide value-driven recommendations to clients

Post-Assessment Resources

For scores 70-79%: Review query optimization and cloud migration planning
For scores 60-69%: Focus on SQL fundamentals and basic administration
For scores below 60%: Repeat module sections and seek additional hands-on practice

Next Learning Path:

  • Module 6: Security Fundamentals
  • Advanced Database Specialization Track
  • Cloud Certification Preparation (AWS/Azure/GCP)

This assessment ensures you can confidently handle real-world database challenges that MSP professionals face daily while providing business value to clients.