If you work in DevOps and spend time in the terminal, knowing Unix commands isn’t optional. It’s part of the job.
Whether you're managing servers, setting up deployments, or fixing something that just broke in production, these commands help you move faster and work smarter.
This cheat sheet keeps things simple. No filler. Just the commands you’ll use when you’re in the middle of real work.
## Quick Reference Table of Essential Unix Commands
| Category | Command | What It Does |
|---|---|---|
| File Operations | `ls` | Lists directory contents |
| | `cp` | Copies files and directories |
| | `mv` | Moves/renames files and directories |
| | `rm` | Removes files and directories |
| | `touch` | Creates empty files or updates timestamps |
| Directory Management | `pwd` | Shows current directory path |
| | `mkdir` | Creates directories |
| | `rmdir` | Removes empty directories |
| | `cd` | Changes directory |
| System Information | `uname` | Shows system information |
| | `top` | Displays active processes |
| | `df` | Shows disk usage |
| | `free` | Shows memory usage |
| Text Processing | `cat` | Concatenates and displays files |
| | `grep` | Searches for patterns in files |
| | `sed` | Stream editor for text transformation |
| | `awk` | Text processing language |
| Networking | `ping` | Tests network connectivity |
| | `curl` | Transfers data from/to servers |
| | `wget` | Downloads files from the web |
| | `netstat` | Shows network statistics |
| Process Management | `ps` | Shows process status |
| | `kill` | Terminates processes |
| | `bg`/`fg` | Controls job execution |
| | `nohup` | Runs commands immune to hangups |
| Permissions | `chmod` | Changes file permissions |
| | `chown` | Changes file ownership |
| | `sudo` | Executes commands as superuser |
## Essential File System Navigation Commands for Server Management
### Using `pwd` and `cd` Commands to Navigate Linux Directory Structures
`pwd` (Print Working Directory) displays your current location in the file system. Engineers rely on this command while managing multi-directory projects or when troubleshooting path-related issues in complex server environments.
It's particularly useful when working with configuration management tools like Ansible or Puppet, where knowing your exact location in the file system hierarchy is crucial.
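A minimal sketch of why this matters in automation: a script can capture its starting directory with `pwd`, work elsewhere, and reliably return. The `/tmp/pwd_demo` path below is just an example.

```shell
# Capture the starting directory so the script can return to it later;
# /tmp/pwd_demo is an illustrative working directory.
START_DIR="$(pwd)"
mkdir -p /tmp/pwd_demo
cd /tmp/pwd_demo
echo "working in $(pwd)"
cd "$START_DIR"        # return to where we started
echo "back in $(pwd)"
```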
`cd` (Change Directory) is your main navigation tool in Unix environments. Use it with relative paths (`cd ../config`) or absolute paths (`cd /etc/nginx`) to move through the file system. Understanding the nuances of path traversal is essential when managing distributed applications with components spread across different directories.
```shell
# Print current directory
pwd

# Change to home directory
cd

# Navigate one level up
cd ..

# Go to a specific directory
cd /etc/docker
```
### Listing Directory Contents with the `ls` Command
The `ls` command reveals what's in a directory, serving as the eyes of SREs when navigating server environments. When troubleshooting production issues or verifying deployment artifacts, the ability to quickly assess directory contents becomes critical.
Engineers use various flags to customize output for different scenarios, tailoring the information display to specific operational needs:
```shell
# Basic listing
ls

# Detailed listing with file permissions, sizes, and dates
ls -l

# Show hidden files (names starting with .)
ls -a

# Human-readable file sizes
ls -lh

# Sort by modification time (newest first)
ls -lt
```
## Critical File Operations and Management Techniques for Infrastructure Configuration
### Creating and Modifying Configuration Files with the `touch` and `cat` Commands
`touch` creates empty files or updates existing file timestamps—essential functionality when testing file-watching services, triggering time-based operations, or creating placeholder configuration files during infrastructure provisioning. In CI/CD pipelines, `touch` can be used to create flag files that signal the completion of specific stages.
`cat` (concatenate) displays file contents directly in the terminal, making it indispensable for developers and DevOps engineers who need to quickly verify configuration files, check logs, or debug application settings.

Beyond simple viewing, `cat` becomes powerful when combined with redirection operators to create or append to files, especially during automated deployments where configuration files must be generated programmatically.
```shell
# Create an empty file
touch deployment.yaml

# View file contents
cat config.json

# Create a file with content via a heredoc
cat > version.txt << EOF
app_version=1.2.3
build_date=$(date)
EOF
```
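The flag-file idea mentioned above can be sketched in a couple of lines; the name `build.done` is a hypothetical example, not a convention of any particular CI system.

```shell
# Mark a pipeline stage as complete with a flag file (name is illustrative)
touch build.done

# A later stage can gate on the flag's existence
if [ -f build.done ]; then
  echo "build stage complete"
fi
```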
### Managing Infrastructure Files: How to Copy, Move, and Remove Files Safely
`cp` (copy), `mv` (move), and `rm` (remove) form the core triad of file manipulation commands in Unix environments. These commands are fundamental to infrastructure management, allowing you to maintain configuration files, deploy application artifacts, and manage system resources.
Understanding their various options and potential risks is critical for maintaining system integrity during operations:
```shell
# Copy a file
cp source.conf target.conf

# Copy a directory recursively
cp -r /config/templates/ /backups/

# Move/rename a file
mv old_name.sh new_name.sh

# Remove a file
rm log.txt

# Remove a directory and its contents (use with caution)
rm -rf temp_build/
```
## Advanced Text Processing Commands for Log Analysis and Troubleshooting
### Finding Critical Patterns in Logs and Configuration Files with `grep`
`grep` (Global Regular Expression Print) searches for text patterns within files—an indispensable tool for log analysis, configuration verification, security audits, and code reviews.
When troubleshooting production issues or verifying deployment configurations, the ability to quickly find specific patterns across multiple files can dramatically reduce incident response times:
```shell
# Search for a pattern in a file
grep "error" application.log

# Case-insensitive search
grep -i "warning" system.log

# Show line numbers
grep -n "deprecated" *.js

# Recursive search through directories
grep -r "api_key" /etc/configs/
```
### Automating Text Transformation in Configuration Files with `sed` and `awk`
`sed` (Stream EDitor) performs sophisticated text transformations without opening files in an editor, making it essential for automation in DevOps workflows.

Infrastructure as Code (IaC) practices rely heavily on `sed` for modifying configuration templates, updating version numbers, changing environment variables, and other text-based transformations that need to happen programmatically during CI/CD processes:
```shell
# Replace text in a file in place (GNU sed; BSD/macOS sed needs -i '')
sed -i 's/old_version/new_version/g' config.yaml

# Print the file with lines containing a pattern removed
sed '/DEBUG/d' production.log
```
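As a concrete sketch of the template-rendering use case described above: substitute a placeholder token when generating a config file. The `{{ENV}}` marker and file names are invented for this example.

```shell
# Create a tiny template with a placeholder token (names are illustrative)
printf 'environment: {{ENV}}\nlog_level: info\n' > app.yaml.tmpl

# Render the template by substituting the token
sed 's/{{ENV}}/production/' app.yaml.tmpl > app.yaml
cat app.yaml
```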
`awk` functions as a complete text processing language, handling complex data manipulation tasks that would be cumbersome with simpler tools.

You can rely on `awk` for parsing structured logs, generating reports from command outputs, and transforming data formats during ETL (Extract, Transform, Load) operations in data pipelines:
```shell
# Print specific columns (PID and command) from ps output
ps aux | awk '{print $2, $11}'

# Sum values in the third column of a file
awk '{sum += $3} END {print sum}' metrics.txt
```
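For the structured-log parsing mentioned above, `-F` sets a custom field separator. A small sketch with an invented comma-separated request log, averaging the latency column:

```shell
# Create a sample CSV log (format and values are invented for illustration)
printf '%s\n' \
  'GET,/api/users,120' \
  'GET,/api/orders,80' \
  'POST,/api/users,220' > requests.csv

# Split fields on commas and average the third column
awk -F',' '{sum += $3; n++} END {printf "avg latency: %d ms\n", sum/n}' requests.csv
```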
## Critical Process Management Techniques for Application Monitoring
### Viewing and Analyzing Running Processes for Performance Optimization
`ps` (Process Status) provides detailed information about running processes, serving as a fundamental diagnostic tool for monitoring application behavior, identifying resource bottlenecks, troubleshooting hung services, and verifying system health.
In containerized environments and microservice architectures, understanding process relationships becomes even more crucial:
```shell
# Show all processes
ps aux

# Filter processes by name
ps aux | grep nginx

# Show the process tree
ps -ejH
```
`top` provides a real-time, interactive view of system processes, making it invaluable for performance monitoring, capacity planning, and identifying resource-intensive applications.

Unlike one-shot commands, `top` continuously updates, giving you a dynamic window into system behavior during load tests, deployments, or incident response scenarios:
```shell
# Launch top
top

# While running: press M to sort by memory usage, P to sort by CPU usage
```
### Controlling and Managing Application Processes During Deployments and Maintenance
`kill` terminates processes when needed during deployments, service restarts, or when handling runaway processes that could impact system stability.
Understanding the different signal types allows you to gracefully shut down applications or forcefully terminate them depending on operational requirements:
```shell
# Kill a process by PID (sends SIGTERM for a graceful shutdown)
kill 1234

# Force kill with SIGKILL (when the process is unresponsive)
kill -9 1234

# Kill processes by name
pkill nginx
```
Background process control with `jobs`, `bg`, and `fg` helps manage multiple concurrent tasks in terminal sessions—essential when juggling several operational activities simultaneously, such as running database migrations while monitoring log files or executing long-running scripts without maintaining an active terminal connection:
```shell
# Run a command in the background
./long_running_script.sh &

# List background jobs
jobs

# Resume a stopped job in the background
bg %1

# Bring a job to the foreground
fg %1
```
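`nohup` (listed in the quick-reference table) complements these job-control commands: it keeps a job running after the terminal hangs up. A minimal sketch, with `sleep` standing in for a real long-running task and `migration.log` as an illustrative log name:

```shell
# Keep a job alive after the terminal disconnects; output that would
# go to the terminal is redirected to a log file instead.
# sleep is a stand-in for a real long-running task.
nohup sleep 30 > migration.log 2>&1 &
echo "started background job with PID $!"
```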
## System Information and Resource Monitoring for Infrastructure Health
### Analyzing Disk and Memory Usage to Prevent Service Outages
`df` (Disk Free) reports file system disk space usage—information that becomes critical before large deployments, database operations, log rotations, or when troubleshooting storage-related performance issues. In production environments, insufficient disk space often leads to cascading failures across services:
```shell
# Show disk usage in human-readable format
df -h

# Show file system types
df -T
```
`du` (Disk Usage) provides detailed analysis of directory sizes, helping engineers identify which components of an application or system are consuming disproportionate amounts of storage.
This becomes particularly valuable when optimizing container images, cleaning up artifact repositories, or troubleshooting unexpected storage consumption patterns:
```shell
# Check a directory's total size
du -sh /var/log

# Find the largest directories (top 5)
du -h /var | sort -hr | head -5
```
Memory monitoring with `free` provides insight into system RAM usage and swap space activity, helping you identify potential resource constraints, memory leaks, or configuration issues that could affect application performance. In containerized environments with resource limits, understanding memory utilization becomes even more critical:
```shell
# Display memory usage in human-readable format
free -h

# Refresh every 3 seconds
free -h -s 3
```
## Networking Commands for Connectivity Troubleshooting
### Fundamental Network Diagnostics for Identifying Connection Issues
`ping` tests basic connectivity to remote hosts by sending ICMP echo requests—often the first diagnostic step in network troubleshooting. This fundamental tool helps you verify network paths, measure latency, detect packet loss, and diagnose connectivity issues between services in distributed architectures:
```shell
# Check whether a server is reachable (send 4 echo requests)
ping -c 4 api.example.com
```
`netstat` examines active network connections and listening ports, providing critical visibility into service availability, connection states, and potential security issues. In microservice architectures and containerized applications where numerous network interactions occur, `netstat` helps verify proper service discovery and communication patterns:
```shell
# Show all listening ports with owning processes
netstat -tulpn

# Check established connections
netstat -an | grep ESTABLISHED
```
### Managing File Transfers and API Interactions in Distributed Systems
`curl` functions as a versatile HTTP client, transferring data to or from servers with support for numerous protocols and authentication methods. Engineers rely on `curl` for API testing, health checks, webhook triggers, and automated interactions with web services.
Its flexible options make it indispensable for troubleshooting service integrations and validating REST endpoints:
```shell
# GET request
curl https://api.example.com/status

# POST request with a JSON body
curl -X POST -d '{"key":"value"}' -H "Content-Type: application/json" https://api.example.com/update
```
`wget` specializes in non-interactive file downloads with robust handling of poor network conditions, making it ideal for unattended operations in automation scripts. You can use it extensively in CI/CD pipelines to retrieve artifacts, download installation packages, or mirror documentation:
```shell
# Download a file
wget https://example.com/package.tar.gz

# Mirror website content (recursive, no parent dirs, convert links)
wget -r -np -k https://docs.example.com/
```
## User and Permission Management for Secure Infrastructure
### Configuring File Permissions to Maintain System Security
`chmod` (Change Mode) modifies file permissions using numeric or symbolic notation, providing granular control over who can read, write, or execute files. This capability is essential for maintaining security posture, ensuring proper application execution, and preventing unauthorized access to sensitive configuration files or credentials:
```shell
# Make a script executable
chmod +x deploy.sh

# Set specific permissions (read/write for owner, read-only for group and others)
chmod 644 config.json

# Recursively set directory permissions
chmod -R 755 /app/scripts/
```
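Numeric and symbolic notation can express the same mode: `755` is `rwx` for the owner and `r-x` for group and others. A small sketch showing the equivalence (the file name is illustrative):

```shell
# Create a demo file and set 755 both ways (file name is illustrative)
touch run_demo.sh
chmod 755 run_demo.sh                  # numeric form
chmod u=rwx,g=rx,o=rx run_demo.sh      # symbolic equivalent of 755

# Inspect the resulting permission bits
ls -l run_demo.sh
```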
`chown` (Change Owner) modifies file ownership attributes, transferring ownership between users and groups. In multi-user environments or with services running as different system users, proper ownership configuration ensures that processes have appropriate access to the files they need while maintaining the principle of least privilege:
```shell
# Change a file's owner and group
chown user:group file.txt

# Recursively change ownership
chown -R www-data:www-data /var/www/
```
### Running Administrative Commands as Different Users with Proper Privilege Management
`sudo` (Superuser Do) executes commands with elevated privileges based on predefined security policies, allowing controlled access to administrative functions without exposing the root password.

This mechanism is fundamental to maintaining proper security practices while still enabling you to perform necessary system operations:
```shell
# Run a command as root
sudo systemctl restart nginx

# Edit a protected file
sudo vim /etc/hosts

# Run a command as a specific user
sudo -u postgres psql
```
## Advanced Command Chaining Techniques for Workflow Automation
### Using Pipelines and Redirections to Build Complex Command Sequences
Pipelines (`|`) connect the output of one command to the input of another, creating powerful data processing chains that transform, filter, and analyze information in a single command sequence. This fundamental Unix philosophy of composable tools enables you to build complex operations from simple building blocks:
```shell
# Find the largest log files (-exec du {} + handles names with spaces safely)
find /var/log -type f -name "*.log" -exec du -h {} + | sort -hr | head -5

# Count occurrences of errors in a log
grep -i error /var/log/application.log | wc -l
```
Input/output redirection operators (`>`, `>>`, `<`) control the flow of data between commands, files, and standard streams, providing fine-grained control over how information moves through the system. These operators form the foundation of automation scripts where capturing output or feeding input from files is essential:
```shell
# Save command output to a file (overwrites)
ls -la > directory_contents.txt

# Append output to an existing file
echo "Deployment successful" >> deploy_history.log

# Use a file as input
sort < unsorted_list.txt
```
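Standard output and standard error are separate streams: `2>` captures errors on their own, while `2>&1` merges them into one destination, which is how scripts typically capture everything into a single log. The paths below are illustrative.

```shell
# Split the streams: listing goes to out.log, the error to err.log
ls /etc /no_such_dir > out.log 2> err.log || true

# Merge both streams into one file with 2>&1
ls /etc /no_such_dir > both.log 2>&1 || true
```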
### Controlling Command Execution Flow in Deployment Scripts and Automation
The conditional execution operator `&&` creates dependency chains where subsequent commands run only if previous ones succeed (exit with status code 0). This flow control is critical in deployment scripts, database migrations, and other operations where proceeding after a failure could cause system corruption or data loss:
```shell
# Only build and push if tests pass
npm test && docker build -t myapp:latest . && docker push myapp:latest
```
The command separator `;` executes commands in sequence regardless of whether previous commands succeed or fail, creating a simple batch-like execution flow. This approach is useful when commands are independent of each other or when you want to ensure all cleanup operations run even after encountering errors:
```shell
# Run multiple commands in sequence, regardless of failures
cd /app; git pull; npm install; npm run build
```
## Conclusion
This Unix commands cheat sheet covers the essential toolset you need for effective infrastructure management and operations.