Proper logging isn't just nice to have – it's your secret weapon when things go sideways.
In this guide, we'll cover the major Java logging frameworks and libraries, how to configure them, and how to fix the problems you'll run into when you need your logs most.
What Makes Java Logging Different?
Java logging stands out because of its maturity and flexibility. Unlike logging in some other languages, Java offers multiple logging libraries that can work together or independently.
The Java ecosystem gives you options: the built-in java.util.logging (JUL), the popular Log4j 2, SLF4J, Apache Commons Logging, and Logback (ch.qos.logback). Each framework has strengths depending on your project's needs.
Java Util Logging (JUL)
Built into the JDK since version 1.4, this logging package requires no external dependencies:
import java.util.logging.Logger;
import java.util.logging.Level;
import java.util.logging.FileHandler;
import java.util.logging.ConsoleHandler;
import java.util.logging.SimpleFormatter;

public class JULExample {
    private static final Logger logger = Logger.getLogger(JULExample.class.getName());

    public static void main(String[] args) {
        try {
            // Create FileHandler and ConsoleHandler
            FileHandler fileHandler = new FileHandler("application.log");
            fileHandler.setFormatter(new SimpleFormatter());
            ConsoleHandler consoleHandler = new ConsoleHandler();

            // Add handlers to the logger
            logger.addHandler(fileHandler);
            logger.addHandler(consoleHandler);

            // Don't forward to the root logger's default handlers
            logger.setUseParentHandlers(false);

            // Lower the levels so FINE messages get through
            // (both the logger and the handlers default to INFO)
            logger.setLevel(Level.ALL);
            fileHandler.setLevel(Level.ALL);
            consoleHandler.setLevel(Level.ALL);

            new JULExample().doSomething();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }

    public void doSomething() {
        logger.info("Starting process");
        try {
            // Business logic here
            logger.fine("Process details: step 1 completed");
        } catch (Exception e) {
            logger.log(Level.SEVERE, "Process failed", e);
        }
    }
}
Pros of JUL:
- No external dependencies
- Always available in any Java environment
- Java EE container integration
- Built-in support for localization
Cons of JUL:
- Less flexible configuration
- Fewer handlers compared to other logging libraries
- Performance not as strong as alternatives
- Verbose syntax for handler configuration
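The handler setup above can also live outside the code. JUL reads a logging.properties file passed via -Djava.util.logging.config.file=...; a minimal sketch (handler choices, levels, and the file name are illustrative):

```properties
# Attach both a file and a console handler to the root logger
handlers = java.util.logging.FileHandler, java.util.logging.ConsoleHandler

# Global default level
.level = INFO

# File handler settings
java.util.logging.FileHandler.pattern = application.log
java.util.logging.FileHandler.formatter = java.util.logging.SimpleFormatter

# Console handler level
java.util.logging.ConsoleHandler.level = INFO
```

This keeps logging policy out of your source code, though the configuration options remain more limited than Log4j 2's or Logback's.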
Log4j 2
The Apache Log4j 2 framework offers better performance and more features than its predecessor. Its configuration is typically stored in an XML file:
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.Logger;

public class Log4jExample {
    private static final Logger logger = LogManager.getLogger(Log4jExample.class);

    public static void main(String[] args) {
        new Log4jExample().processOrder(new Order("12345"));
    }

    public void processOrder(Order order) {
        logger.debug("Processing order: {}", order.getId());
        try {
            // Processing code
            logger.info("Order {} processed successfully", order.getId());
        } catch (Exception e) {
            logger.error("Failed to process order: {}", order.getId(), e);
        }
    }
}

class Order {
    private String id;

    public Order(String id) {
        this.id = id;
    }

    public String getId() {
        return id;
    }
}
Pros of Log4j 2:
- Garbage-free logging (crucial for high-performance apps)
- Automatic reloading of configuration
- Extensive filtering options
- Asynchronous loggers
SLF4J + Logback
SLF4J acts as a facade, allowing you to switch logging implementations without changing code. It pairs naturally with Logback (ch.qos.logback):
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;

public class SLF4JExample {
    // Obtain a logger through the SLF4J facade
    private static final Logger logger = LoggerFactory.getLogger(SLF4JExample.class);

    public static void main(String[] args) {
        new SLF4JExample().authenticateUser("johndoe");
    }

    public void authenticateUser(String username) {
        logger.info("Authentication attempt for user: {}", username);

        // Guard expensive calls so they only run when debug is enabled
        // (SLF4J 2.x also offers a fluent API that accepts Suppliers)
        if (logger.isDebugEnabled()) {
            logger.debug("Auth details: method={}, IP={}", getAuthMethod(), getClientIP());
        }

        boolean authFailed = false; // Simulated authentication check
        if (authFailed) {
            logger.warn("Authentication failed for user: {}", username);
        }
    }

    private String getAuthMethod() {
        return "OAuth";
    }

    private String getClientIP() {
        return "192.168.1.1";
    }
}
Pros of SLF4J + Logback:
- Great performance characteristics
- Native support for parameterized logging
- Flexible configuration via XML, Groovy, or programmatically
- Automatic compression of archived logs
- Logback integrates tightly with Spring Boot (it's the default)
- Lazy parameter evaluation keeps disabled log statements cheap
Common Java Log Problems (And How to Fix Them)
Missing Log Output
You've added logging statements, but nothing shows up. What gives?
Configuration Issues
Problem 1: Missing Configuration File
Log4j 2 looks for configuration in these classpath locations, in order:
- log4j2-test.properties, log4j2-test.yaml, log4j2-test.json, or log4j2-test.xml
- log4j2.properties, log4j2.yaml, log4j2.json, or log4j2.xml
- Otherwise, it falls back to the default configuration (console only, ERROR level)
Solution: Place your configuration file in src/main/resources for Maven/Gradle projects. You can also find plenty of configuration examples in other developers' GitHub repositories.
Minimal log4j2.properties example:
rootLogger.level = info
rootLogger.appenderRefs = stdout
rootLogger.appenderRef.stdout.ref = STDOUT
appender.console.type = Console
appender.console.name = STDOUT
appender.console.layout.type = PatternLayout
appender.console.layout.pattern = %d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n
Or as an XML file (which many developers prefer):
<?xml version="1.0" encoding="UTF-8"?>
<Configuration status="WARN">
    <Appenders>
        <Console name="Console" target="SYSTEM_OUT">
            <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
        </Console>
    </Appenders>
    <Loggers>
        <Root level="info">
            <AppenderRef ref="Console"/>
        </Root>
    </Loggers>
</Configuration>
Problem 2: Incorrect Log Level
The hierarchy of log levels (from most to least verbose):
- TRACE
- DEBUG
- INFO
- WARN
- ERROR
- FATAL
If your logger level is set to ERROR but you're using logger.debug(), those messages will be filtered out.
Solution: Check and adjust the level in your configuration file. During development, set it to DEBUG:
<!-- For Log4j2 XML config -->
<Loggers>
    <Root level="debug">
        <AppenderRef ref="Console"/>
    </Root>
    <!-- You can set different levels for specific packages -->
    <Logger name="org.hibernate" level="warn"/>
</Loggers>
Problem 3: Logger Name Mismatch
Logger levels are inherited down the name hierarchy: a logger named "com.example.MyClass" picks up the configuration of "com.example" unless something more specific overrides it. If the logger name in your configuration doesn't match the name your code actually uses (a typo, or a since-renamed package), messages can silently disappear.
Solution: Check your logger hierarchy, or use the <Root> logger to catch everything:
<Logger name="com.example" level="debug"/>
System Issues
Problem 1: File Permission Errors
When logging to a file, the Java process might lack permission to write to the specified location.
Solution:
- Check file permissions
- Use relative paths that are accessible to the Java process
- Check system logs for permission errors
- Use try-catch around file operations with detailed error logging
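The last point deserves a sketch. One defensive pattern (shown here with JUL so it stays dependency-free; the method name and fallback choice are illustrative) is to attempt file-handler creation, log the failure in detail, and fall back to console output:

```java
import java.io.IOException;
import java.util.logging.ConsoleHandler;
import java.util.logging.FileHandler;
import java.util.logging.Handler;
import java.util.logging.Logger;
import java.util.logging.SimpleFormatter;

public class SafeLogSetup {
    /** Attach a file handler if possible, otherwise fall back to the console. */
    public static Handler attachFileOrConsole(Logger logger, String path) {
        try {
            FileHandler fileHandler = new FileHandler(path);
            fileHandler.setFormatter(new SimpleFormatter());
            logger.addHandler(fileHandler);
            return fileHandler;
        } catch (IOException | SecurityException e) {
            // Fall back so log output isn't lost entirely
            ConsoleHandler consoleHandler = new ConsoleHandler();
            logger.addHandler(consoleHandler);
            // Record exactly why file logging failed so the cause is visible
            logger.warning("File logging unavailable (" + path + "): " + e);
            return consoleHandler;
        }
    }
}
```

This way a permission or path problem degrades your logging instead of silently disabling it.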
Problem 2: Disk Space Limitations
Logging might silently fail if the disk is full.
Solution:
- Implement disk space checking before logging
- Set up system monitoring for disk space
- Configure log rotation policies:
<RollingFile name="RollingFile" fileName="logs/app.log"
             filePattern="logs/app-%d{MM-dd-yyyy}-%i.log.gz">
    <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    <Policies>
        <SizeBasedTriggeringPolicy size="10 MB"/>
        <TimeBasedTriggeringPolicy/>
    </Policies>
    <DefaultRolloverStrategy max="20">
        <Delete basePath="logs" maxDepth="1">
            <IfFileName glob="*.log.gz"/>
            <IfLastModified age="7d"/>
        </Delete>
    </DefaultRolloverStrategy>
</RollingFile>
Performance Bottlenecks
Your app slows to a crawl because of excessive logging. Here's how to fix various performance issues:
String Concatenation Overhead
Problem: String concatenation happens even when the log level means the message won't be printed.
// BAD: Always concatenates strings regardless of level
logger.debug("User data: " + userObject.toString() + " for session: " + sessionId);
Solution: Use parameterized logging or lazy evaluation:
// GOOD: Parameters are only processed if debug is enabled
logger.debug("User data: {} for session: {}", userObject, sessionId);
// BETTER for expensive operations (SLF4J 2.x fluent API, lazily evaluated):
logger.atDebug().log(() -> "Detailed calculations: " + expensiveCalculation());
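The effect is easy to verify. This dependency-free JUL sketch shows that parameterized messages are never formatted when the level is disabled, while string concatenation always pays the cost (class names here are illustrative):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class LazyFormattingDemo {
    // Counts how often toString() is actually invoked
    static class Expensive {
        int calls = 0;
        @Override public String toString() { calls++; return "expensive"; }
    }

    public static void main(String[] args) {
        Logger logger = Logger.getLogger("lazy-demo");
        logger.setLevel(Level.INFO); // FINE (debug-level) is disabled

        Expensive value = new Expensive();

        // Parameterized form: the argument is only formatted if FINE is enabled
        logger.log(Level.FINE, "value={0}", value);
        System.out.println(value.calls); // 0 - toString() never ran

        // String concatenation builds the message before the level check
        logger.fine("value=" + value);
        System.out.println(value.calls); // 1 - toString() ran anyway
    }
}
```

The same reasoning is why SLF4J's `{}` placeholders are cheap in hot paths.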
I/O Bottlenecks
Problem: Synchronous logging to disk can block application threads.
Solution: Use asynchronous appenders:
<Appenders>
    <!-- Define the file appender first -->
    <File name="File" fileName="logs/app.log">
        <PatternLayout pattern="%d{HH:mm:ss.SSS} [%t] %-5level %logger{36} - %msg%n"/>
    </File>
    <!-- Wrap it with an Async appender; queue size is set via the bufferSize attribute -->
    <Async name="AsyncFile" bufferSize="512">
        <AppenderRef ref="File"/>
    </Async>
</Appenders>
<Loggers>
    <Root level="debug">
        <AppenderRef ref="AsyncFile"/>
    </Root>
</Loggers>
For Log4j2, consider using the LMAX Disruptor for even better async performance:
<Configuration status="WARN">
    <Properties>
        <Property name="log-path">logs</Property>
    </Properties>
    <Loggers>
        <!-- AsyncLogger belongs inside <Loggers>; it requires the Disruptor on the classpath -->
        <AsyncLogger name="com.example" level="debug" includeLocation="false">
            <AppenderRef ref="Console"/>
            <AppenderRef ref="File"/>
        </AsyncLogger>
    </Loggers>
</Configuration>
This configuration demonstrates one of the more advanced features of modern Java logging frameworks: asynchronous logging, which can dramatically improve throughput.
Add this dependency:
<dependency>
<groupId>com.lmax</groupId>
<artifactId>disruptor</artifactId>
<version>3.4.4</version>
</dependency>
And this JVM parameter:
-Dlog4j2.contextSelector=org.apache.logging.log4j.core.async.AsyncLoggerContextSelector
Memory Issues
Problem: Excessive logging creates millions of temporary objects that trigger frequent garbage collection.
Solution: Use Log4j 2's garbage-free mode. It's enabled by default for standalone applications; in web containers, thread-locals are disabled by default, so you may need to turn it on explicitly with system properties:
-Dlog4j2.enableThreadlocals=true
-Dlog4j2.enableDirectEncoders=true
And in your code:
// Parameterized messages avoid temporary strings; in garbage-free
// mode Log4j 2 formats them into reused thread-local buffers
logger.debug("Complex {} with many {} to avoid {}",
    "message", "parameters", "object creation");
For further ideas, the Logback (ch.qos.logback) codebase on GitHub shows how its logging methods are optimized to minimize memory overhead.
Log File Management Issues
Managing log files becomes critical as applications grow.
Log Rotation Strategies
Problem: Logs grow indefinitely and consume all disk space.
Solution: Implement a comprehensive rotation strategy:
<RollingFile name="RollingFile" fileName="${log-path}/app.log"
             filePattern="${log-path}/archive/app.%d{yyyy-MM-dd}-%i.log.gz">
    <PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss} %-5p %c{1}:%L - %m%n"/>
    <Policies>
        <!-- Roll over at midnight each day -->
        <TimeBasedTriggeringPolicy interval="1"/>
        <!-- OR when file size reaches threshold -->
        <SizeBasedTriggeringPolicy size="10 MB"/>
    </Policies>
    <!-- Keep 30 days' worth of logs -->
    <DefaultRolloverStrategy>
        <Delete basePath="${log-path}/archive">
            <IfFileName glob="*.log.gz">
                <IfLastModified age="30d"/>
            </IfFileName>
        </Delete>
    </DefaultRolloverStrategy>
</RollingFile>
Managing Multi-Environment Logging
Problem: Different environments need different logging configurations.
Solution: Use Spring profiles or system properties to load different configurations:
// Set up Log4j2 configuration based on environment
System.setProperty("log4j.configurationFile",
    System.getProperty("env", "dev") + "/log4j2.xml");
Or with Spring Boot:
# application-dev.properties
logging.level.root=DEBUG
logging.level.org.springframework=INFO
logging.file.name=logs/application-dev.log
# application-prod.properties
logging.level.root=WARN
logging.level.com.yourapp=INFO
logging.file.name=/var/log/yourapp/application.log
Java Log Best Practices
1. Choose the Right Log Level
Think of log levels as a conversation with different audiences:
| Log Level | When to Use | Example | Impact/Audience |
|---|---|---|---|
| ERROR | Something broke that needs fixing | "Database connection failed: Connection refused" | Triggers alerts, seen by operations |
| WARN | Something unusual that might cause problems | "API call retry 3/5: timeout after 5s" | Monitored in dashboards |
| INFO | Normal operations worth tracking | "Order #12345 processed successfully: $123.45" | Business metrics, app health |
| DEBUG | Details useful during development | "Processing item #1234 with attributes: {color=red, size=medium}" | Developers for troubleshooting |
| TRACE | Very detailed info for tracking code flow | "Entered method calculateTotal() with parameters: [1, 2, 3]" | Deep debugging sessions |
Level Selection Guidelines:
- For Production:
- ROOT level: WARN or INFO
- Your application packages: INFO
- Noisy third-party libraries: WARN
- For Development:
- ROOT level: INFO
- Your application packages: DEBUG
- Focused troubleshooting packages: TRACE
Example configuration with proper level separation:
<Loggers>
    <!-- Root logger sets the baseline -->
    <Root level="warn">
        <AppenderRef ref="Console"/>
        <AppenderRef ref="File"/>
    </Root>
    <!-- Your application gets more detailed logging -->
    <Logger name="com.yourcompany" level="info" additivity="false">
        <AppenderRef ref="Console"/>
        <AppenderRef ref="File"/>
    </Logger>
    <!-- Specific troubleshooting area gets full detail -->
    <Logger name="com.yourcompany.payments" level="debug" additivity="false">
        <AppenderRef ref="Console"/>
        <AppenderRef ref="PaymentLogFile"/>
    </Logger>
    <!-- Third-party libraries get minimal logging -->
    <Logger name="org.hibernate" level="warn"/>
    <Logger name="org.springframework" level="warn"/>
</Loggers>
2. Structure Your Log Messages
Bad vs. Good Log Messages
Bad:
"Error"
"Process failed"
"Could not complete request"
Good:
"Payment processing failed: Invalid credit card expiration date [CARD_ID=1234, ORDER_ID=5678, USER_ID=9012]"
"Database connection failed after 5 retry attempts [DB=orders_db, HOST=db-03.example.com, PORT=5432]"
"User password reset request rejected: Account locked [USER_ID=1234, IP=192.168.1.1, ATTEMPT=3]"
Structured Logging Pattern
Always include:
- What happened - Clear description of the event
- Why it happened - Error reason or condition
- Context data - IDs, timestamps, system state
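The three parts compose into one message. A minimal sketch of the pattern (the helper name and IDs are illustrative, not a real API):

```java
public class StructuredMessage {
    /** Build a "what: why [context]" message string. */
    public static String build(String what, String why, String context) {
        return what + ": " + why + " [" + context + "]";
    }

    public static void main(String[] args) {
        String msg = build(
            "Payment processing failed",                  // what happened
            "Invalid credit card expiration date",        // why it happened
            "CARD_ID=1234, ORDER_ID=5678, USER_ID=9012"); // context data
        System.out.println(msg);
        // Payment processing failed: Invalid credit card expiration date [CARD_ID=1234, ORDER_ID=5678, USER_ID=9012]
    }
}
```

Keeping the shape consistent makes grep and log-search queries far more reliable.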
JSON Logging Format
For machine processing, JSON format is often better:
<dependencies>
    <dependency>
        <groupId>org.apache.logging.log4j</groupId>
        <artifactId>log4j-layout-template-json</artifactId>
        <!-- Use 2.17.1 or later; earlier Log4j 2.x releases are affected by Log4Shell -->
        <version>2.17.1</version>
    </dependency>
</dependencies>
Configuration:
<JsonTemplateLayout eventTemplateUri="classpath:LogstashJsonEventLayoutV1.json">
    <EventTemplateAdditionalField key="app_name" value="inventory-service"/>
    <EventTemplateAdditionalField key="environment" value="${sys:env.name:-dev}"/>
</JsonTemplateLayout>
Sample output:
{
  "timestamp": "2023-07-22T14:32:51.253+02:00",
  "level": "ERROR",
  "thread": "main",
  "logger": "com.example.OrderService",
  "message": "Payment processing failed",
  "exception": {
    "class": "java.io.IOException",
    "message": "Connection reset",
    "stacktrace": "..."
  },
  "context": {
    "orderId": "ORD-12345",
    "userId": "USR-6789",
    "amount": 129.99
  },
  "app_name": "order-service",
  "environment": "staging"
}
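If pulling in a JSON layout dependency isn't an option, the shape is easy to approximate with a custom JUL Formatter. This is a rough sketch only (a real JSON layout also escapes quotes, serializes exceptions, and handles MDC fields):

```java
import java.util.logging.Formatter;
import java.util.logging.LogRecord;

public class JsonishFormatter extends Formatter {
    @Override
    public String format(LogRecord record) {
        // Emit one JSON object per log record, newline-delimited
        return "{"
            + "\"timestamp\":\"" + record.getInstant() + "\","
            + "\"level\":\"" + record.getLevel().getName() + "\","
            + "\"logger\":\"" + record.getLoggerName() + "\","
            + "\"message\":\"" + formatMessage(record) + "\""
            + "}\n";
    }
}
```

Attach it with handler.setFormatter(new JsonishFormatter()); for production use, prefer a maintained layout such as the one above.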
3. Use Contextual Logging
Add context that helps troubleshooting:
Thread Context with MDC
The Mapped Diagnostic Context (MDC) allows you to attach contextual data to log messages across method calls:
import java.io.IOException;
import java.util.UUID;

// On Jakarta EE 9+ containers these are jakarta.servlet.* instead
import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.annotation.WebFilter;
import javax.servlet.http.HttpServletRequest;

import org.slf4j.MDC;

@WebFilter("/*")
public class RequestContextFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        String requestId = req.getHeader("X-Request-ID");
        if (requestId == null) {
            requestId = UUID.randomUUID().toString();
        }
        String userId = getUserId(req); // Your authentication logic
        try {
            // Add context available to all log statements in this thread
            MDC.put("requestId", requestId);
            MDC.put("userId", userId);
            MDC.put("ip", req.getRemoteAddr());
            MDC.put("userAgent", req.getHeader("User-Agent"));
            // Continue with request processing
            chain.doFilter(request, response);
        } finally {
            // Always clean up MDC to prevent leaks in thread pools
            MDC.clear();
        }
    }
}
Pattern layout including MDC variables:
<PatternLayout pattern="%d{yyyy-MM-dd HH:mm:ss.SSS} [%thread] %-5level %logger{36} [reqId=%X{requestId}, user=%X{userId}, ip=%X{ip}] - %msg%n"/>
Output example:
2023-07-22 14:35:12.123 [http-nio-8080-exec-3] INFO c.e.UserController [reqId=550e8400-e29b-41d4-a716-446655440000, user=john.doe, ip=192.168.1.100] - User profile updated successfully
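One caveat: MDC data is stored per thread, so it does not automatically follow work handed to an executor. The usual fix is to capture the context when the task is created and restore it when the task runs; with SLF4J that means MDC.getCopyOfContextMap() / MDC.setContextMap(). The same capture-and-restore pattern can be sketched dependency-free with a plain ThreadLocal (class and method names are illustrative):

```java
import java.util.HashMap;
import java.util.Map;

public class ContextPropagation {
    // Stand-in for MDC: a per-thread map of contextual values
    private static final ThreadLocal<Map<String, String>> CTX =
        ThreadLocal.withInitial(HashMap::new);

    public static void put(String key, String value) { CTX.get().put(key, value); }
    public static String get(String key) { return CTX.get().get(key); }

    /** Wrap a task so it sees the submitting thread's context. */
    public static Runnable wrap(Runnable task) {
        Map<String, String> snapshot = new HashMap<>(CTX.get()); // capture now
        return () -> {
            Map<String, String> previous = CTX.get();
            CTX.set(snapshot);          // restore on the worker thread
            try {
                task.run();
            } finally {
                CTX.set(previous);      // always clean up, as with MDC.clear()
            }
        };
    }
}
```

Submit executor.submit(ContextPropagation.wrap(task)) instead of the bare task, and your request IDs survive the thread hop.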
Nested Diagnostic Context (NDC)
For nested execution contexts like recursive calls:
import org.apache.logging.log4j.ThreadContext; // For Log4j2

public void processNode(TreeNode node, int depth) {
    ThreadContext.push("node" + node.getId() + "[depth=" + depth + "]");
    try {
        logger.debug("Processing node");
        // Process children recursively
        for (TreeNode child : node.getChildren()) {
            processNode(child, depth + 1);
        }
        logger.debug("Node processing complete");
    } finally {
        ThreadContext.pop();
    }
}
Pattern using NDC:
<PatternLayout pattern="%d %-5p [%c{1}] %X{requestId} %x - %m%n"/>
Output:
2023-07-22 14:36:23 DEBUG [TreeProcessor] req-123 node1[depth=0] node2[depth=1] - Processing node
Advanced Java Log Techniques
Centralized Logging
As your app scales across multiple servers, centralized logging becomes essential.
ELK Stack Integration
Elasticsearch, Logstash, and Kibana provide a powerful stack for log aggregation:
- Configure Logstash Output:
<dependency>
    <groupId>org.apache.logging.log4j</groupId>
    <artifactId>log4j-to-slf4j</artifactId>
    <version>2.17.1</version>
</dependency>
<dependency>
    <groupId>net.logstash.logback</groupId>
    <artifactId>logstash-logback-encoder</artifactId>
    <version>6.6</version>
</dependency>
- Logback Configuration:
<appender name="LOGSTASH" class="net.logstash.logback.appender.LogstashTcpSocketAppender">
    <destination>logstash-server:5000</destination>
    <encoder class="net.logstash.logback.encoder.LogstashEncoder">
        <includeMdc>true</includeMdc>
        <customFields>{"app_name":"user-service","environment":"${ENV:-dev}"}</customFields>
    </encoder>
    <keepAliveDuration>5 minutes</keepAliveDuration>
</appender>
<root level="INFO">
    <appender-ref ref="LOGSTASH" />
</root>
Last9 Cloud Integration
For a managed logging solution with Last9:
- Add Last9 Agent:
<dependency>
    <groupId>com.last9</groupId>
    <artifactId>java-agent</artifactId>
    <version>1.2.3</version>
</dependency>
- Configure in your application.properties:
last9.api.key=your-api-key
last9.service.name=${spring.application.name}
last9.environment=${spring.profiles.active}
last9.log.forwarding.enabled=true
- Or use JVM parameters:
-javaagent:/path/to/last9-agent.jar
-Dlast9.api.key=your-api-key
-Dlast9.service.name=order-service
-Dlast9.environment=production
Log Correlation
For microservices, tracking requests across systems requires correlation IDs.
Spring Cloud Sleuth Integration
Add automatic tracing with minimal code (note that in Spring Boot 3, Sleuth's successor is Micrometer Tracing):
<dependency>
    <groupId>org.springframework.cloud</groupId>
    <artifactId>spring-cloud-starter-sleuth</artifactId>
</dependency>
Sleuth automatically adds traceId and spanId to logs:
2023-07-22 14:42:12.123 [order-service,5745aa8feb3cb1ec,9e53b35d7e828e81] INFO OrderController - Processing order
Manual Trace Propagation
For non-Spring applications:
import java.io.IOException;
import java.util.UUID;

import javax.servlet.Filter;
import javax.servlet.FilterChain;
import javax.servlet.ServletException;
import javax.servlet.ServletRequest;
import javax.servlet.ServletResponse;
import javax.servlet.http.HttpServletRequest;
import javax.servlet.http.HttpServletResponse;

import org.slf4j.MDC;

import okhttp3.OkHttpClient;
import okhttp3.Request;

public class TraceFilter implements Filter {
    @Override
    public void doFilter(ServletRequest request, ServletResponse response, FilterChain chain)
            throws IOException, ServletException {
        HttpServletRequest req = (HttpServletRequest) request;
        HttpServletResponse res = (HttpServletResponse) response;
        String traceId = req.getHeader("X-Trace-ID");
        if (traceId == null) {
            traceId = UUID.randomUUID().toString();
        }
        MDC.put("traceId", traceId);
        try {
            // Add the trace ID to outgoing responses
            res.setHeader("X-Trace-ID", traceId);
            // Add it to outgoing requests too
            registerTraceInterceptor(traceId);
            chain.doFilter(request, response);
        } finally {
            MDC.remove("traceId");
        }
    }

    private void registerTraceInterceptor(String traceId) {
        // For OkHttp
        OkHttpClient client = new OkHttpClient.Builder()
            .addInterceptor(chain -> {
                Request originalRequest = chain.request();
                Request requestWithTrace = originalRequest.newBuilder()
                    .header("X-Trace-ID", traceId)
                    .build();
                return chain.proceed(requestWithTrace);
            })
            .build();
        // Make the client available for service calls
        // (HttpClientRegistry is an application-specific holder)
        HttpClientRegistry.set(client);
    }
}
Custom Logging Frameworks
When standard frameworks don't meet your needs:
Domain-Specific Logger
Create loggers with methods specific to your domain:
public class PaymentLogger {
    private final Logger logger;

    public PaymentLogger(Class<?> clazz) {
        this.logger = LoggerFactory.getLogger(clazz);
    }

    public void paymentInitiated(String paymentId, double amount, String currency, String customerId) {
        logger.info("PAYMENT_INITIATED: id={}, amount={}, currency={}, customer={}",
            paymentId, amount, currency, customerId);
        // Could also send to a payment analytics system
        PaymentMetrics.recordPaymentAttempt(amount, currency);
    }

    public void paymentSuccess(String paymentId, String transactionId, long processingTimeMs) {
        logger.info("PAYMENT_SUCCESS: id={}, txn={}, processingTime={}ms",
            paymentId, transactionId, processingTimeMs);
        PaymentMetrics.recordPaymentSuccess(processingTimeMs);
    }

    public void paymentFailed(String paymentId, String reason, String errorCode) {
        logger.error("PAYMENT_FAILED: id={}, reason={}, errorCode={}",
            paymentId, reason, errorCode);
        PaymentMetrics.recordPaymentFailure(errorCode);
        // For critical payment errors, could trigger alerts
        if (errorCode.startsWith("CRIT_")) {
            AlertSystem.triggerAlert("Payment processing critical failure: " + reason);
        }
    }
}
Usage:
public class PaymentService {
    private static final PaymentLogger logger = new PaymentLogger(PaymentService.class);
    private final PaymentGateway paymentGateway = new PaymentGateway(); // your gateway client

    public void processPayment(Payment payment) {
        long startTime = System.currentTimeMillis();
        logger.paymentInitiated(payment.getId(), payment.getAmount(),
            payment.getCurrency(), payment.getCustomerId());
        try {
            // Payment processing logic
            String txnId = paymentGateway.process(payment);
            long processingTime = System.currentTimeMillis() - startTime;
            logger.paymentSuccess(payment.getId(), txnId, processingTime);
        } catch (PaymentException e) {
            logger.paymentFailed(payment.getId(), e.getMessage(), e.getErrorCode());
        }
    }
}
Monitoring Your Java Logs
Setting up proper monitoring prevents issues from going unnoticed.
Real-time Alerts
Elastic Stack Alerting
With Elasticsearch and Kibana:
- Create a watcher in Elasticsearch:
{
  "trigger": {
    "schedule": { "interval": "5m" }
  },
  "input": {
    "search": {
      "request": {
        "indices": ["logs-*"],
        "body": {
          "query": {
            "bool": {
              "must": [
                { "match": { "level": "ERROR" } },
                { "match": { "message": "payment" } },
                { "range": { "@timestamp": { "gte": "now-5m" } } }
              ]
            }
          }
        }
      }
    }
  },
  "condition": {
    "compare": { "ctx.payload.hits.total": { "gt": 5 } }
  },
  "actions": {
    "email_admin": {
      "email": {
        "to": "admin@example.com",
        "subject": "Payment Error Alert",
        "body": "More than 5 payment errors in the last 5 minutes"
      }
    }
  }
}
- Or use Kibana Alerting:
- Create rules based on thresholds
- Monitor for specific log patterns
- Integrate with PagerDuty, Slack, etc.
Log Analysis Dashboards
Last9 offers comprehensive observability solutions that integrate well with popular tools like Grafana and Kibana.
With Last9, you can monitor, troubleshoot, and gain insights from your logs, making it easier to identify and resolve issues.
Kibana Dashboard
- Error Rate Panel:
- Count of errors over time
- Breakdown by service and error type to quickly pinpoint problem areas
- Response Time Panel:
- 95th percentile service response times to highlight performance outliers
- Slow endpoint identification to help optimize system performance
- User Activity Panel:
- Track login success/failure rates for authentication monitoring
- Authentication attack detection to quickly flag suspicious activity
Grafana + Loki for Kubernetes Environments
For Kubernetes environments, using Grafana with Loki offers a lightweight and efficient way to handle logs. Here's how to configure it:
Configure Log Collection:
loki:
  enabled: true
  persistence:
    enabled: true
    size: 10Gi
promtail:
  enabled: true
  config:
    snippets:
      extraScrapeConfigs: |
        - job_name: java-app-logs
          kubernetes_sd_configs:
            - role: pod
          relabel_configs:
            - source_labels: [__meta_kubernetes_pod_label_app]
              regex: java-app
              action: keep
Create Log Dashboards
- Combine Metrics + Logs: Bring both application metrics and log data into a single view for a holistic look at system performance.
- Alert Rules: Set up alert rules based on log patterns, so you're notified instantly when something goes wrong.
- Correlate Application Performance with Log Events: Correlate system performance metrics with the relevant log events to get to the root cause faster.
With Last9, you can integrate all of this into your observability pipeline, allowing you to efficiently monitor and troubleshoot your system’s performance.
Advanced Troubleshooting
Handling Multi-threaded Applications
Thread pools create logging challenges in any Java application. Make thread names informative:
import java.util.concurrent.*;
import java.util.concurrent.atomic.AtomicInteger;

public class CustomThreadPool {
    public static void main(String[] args) {
        AtomicInteger counter = new AtomicInteger();
        ThreadPoolExecutor executor = new ThreadPoolExecutor(
            10, 20, 60, TimeUnit.SECONDS,
            new LinkedBlockingQueue<>(100),
            r -> {
                Thread t = new Thread(r);
                t.setName("task-processor-" + counter.incrementAndGet());
                return t;
            }
        );
        // Use the executor
        executor.submit(() -> {
            // Task code here
        });
        executor.shutdown();
    }
}
Use thread IDs in log patterns:
<PatternLayout pattern="%d [%t] %-5level %logger{36} - %msg%n"/>
Detecting and Solving Memory Leaks
Logging can help detect memory issues:
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import org.springframework.scheduling.annotation.Scheduled;

public class MemoryMonitor {
    private static final Logger logger = LoggerFactory.getLogger(MemoryMonitor.class);

    @Scheduled(fixedRate = 60000)
    public void logMemoryUsage() {
        Runtime runtime = Runtime.getRuntime();
        long totalMemory = runtime.totalMemory() / (1024 * 1024);
        long freeMemory = runtime.freeMemory() / (1024 * 1024);
        long usedMemory = totalMemory - freeMemory;
        logger.info("Memory usage: used={}MB, free={}MB, total={}MB, max={}MB",
            usedMemory, freeMemory, totalMemory, runtime.maxMemory() / (1024 * 1024));
        // Add critical warning if memory is running low
        if (freeMemory < 100) { // Less than 100MB free
            logger.warn("Memory running low! Consider restarting service");
        }
    }

    public static void main(String[] args) {
        // Run once manually; in a Spring app, @Scheduled drives this instead
        new MemoryMonitor().logMemoryUsage();
    }
}
Handling Log Configuration Changes Without Restart
With Log4j2, you can make configuration changes that take effect without application restarts:
<Configuration status="warn" monitorInterval="30">
    <!-- Configuration is checked every 30 seconds for changes -->
</Configuration>
For programmatic configuration changes:
import org.apache.logging.log4j.Level;
import org.apache.logging.log4j.LogManager;
import org.apache.logging.log4j.core.LoggerContext;
import org.apache.logging.log4j.core.config.Configuration;
import org.apache.logging.log4j.core.config.LoggerConfig;

public class DynamicLogLevelChanger {
    public static void main(String[] args) {
        changeLogLevel("com.example", Level.DEBUG);
    }

    public static void changeLogLevel(String loggerName, Level level) {
        LoggerContext context = (LoggerContext) LogManager.getContext(false);
        Configuration config = context.getConfiguration();
        // Change log levels dynamically
        LoggerConfig loggerConfig = config.getLoggerConfig(loggerName);
        loggerConfig.setLevel(level);
        // Apply changes
        context.updateLoggers();
        System.out.println("Log level for " + loggerName + " changed to " + level);
    }
}
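JUL supports the same kind of runtime adjustment with no extra machinery; Logger.setLevel takes effect immediately. A minimal sketch (note that you must keep a strong reference to the Logger, or JUL may garbage-collect it along with its settings):

```java
import java.util.logging.Level;
import java.util.logging.Logger;

public class JulDynamicLevel {
    // Hold a strong reference so the level setting survives GC
    private static final Logger LOGGER = Logger.getLogger("com.example");

    public static void main(String[] args) {
        LOGGER.setLevel(Level.WARNING);
        System.out.println(LOGGER.isLoggable(Level.FINE));  // false

        // Later, turn on debug-level output without restarting
        LOGGER.setLevel(Level.FINE);
        System.out.println(LOGGER.isLoggable(Level.FINE));  // true
    }
}
```

In a real service you would expose this through an admin endpoint or JMX rather than hard-coding it.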
Conclusion
Remember these key principles:
- Choose the right log levels for each environment
- Structure your log messages with what, why, and context
- Use correlation IDs to track requests across services
- Configure proper log rotation and archiving
- Implement centralized logging for distributed systems
- Create alerts for critical error patterns
- Balance verbosity with performance needs