Depending on your preference, choose either log4j2 or logback as your log provider. If you use the AspectJ annotation approach, you must configure AspectJ to weave the code and make sure the annotations are processed. If you prefer the functional approach, no AspectJ configuration is required.
```xml
<dependencies>
    ...
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging-log4j</artifactId>
        <version>2.8.0</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging</artifactId>
        <version>2.8.0</version>
    </dependency>
    ...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<!-- Note: This AspectJ configuration is not needed when using the functional approach -->
<build>
    <plugins>
        ...
        <plugin>
            <groupId>dev.aspectj</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.14</version>
            <configuration>
                <source>11</source> <!-- or higher -->
                <target>11</target> <!-- or higher -->
                <complianceLevel>11</complianceLevel> <!-- or higher -->
                <aspectLibraries>
                    <aspectLibrary>
                        <groupId>software.amazon.lambda</groupId>
                        <artifactId>powertools-logging</artifactId>
                    </aspectLibrary>
                </aspectLibraries>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>org.aspectj</groupId>
                    <artifactId>aspectjtools</artifactId>
                    <!-- AspectJ compiler version, in sync with runtime -->
                    <version>1.9.22</version>
                </dependency>
            </dependencies>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        ...
    </plugins>
</build>
```
```xml
<dependencies>
    ...
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging-logback</artifactId>
        <version>2.8.0</version>
    </dependency>
    <dependency>
        <groupId>software.amazon.lambda</groupId>
        <artifactId>powertools-logging</artifactId>
        <version>2.8.0</version>
    </dependency>
    ...
</dependencies>
...
<!-- configure the aspectj-maven-plugin to compile-time weave (CTW) the aws-lambda-powertools-java aspects into your project -->
<!-- Note: This AspectJ configuration is not needed when using the functional approach -->
<build>
    <plugins>
        ...
        <plugin>
            <groupId>dev.aspectj</groupId>
            <artifactId>aspectj-maven-plugin</artifactId>
            <version>1.14</version>
            <configuration>
                <source>11</source> <!-- or higher -->
                <target>11</target> <!-- or higher -->
                <complianceLevel>11</complianceLevel> <!-- or higher -->
                <aspectLibraries>
                    <aspectLibrary>
                        <groupId>software.amazon.lambda</groupId>
                        <artifactId>powertools-logging</artifactId>
                    </aspectLibrary>
                </aspectLibraries>
            </configuration>
            <dependencies>
                <dependency>
                    <groupId>org.aspectj</groupId>
                    <artifactId>aspectjtools</artifactId>
                    <!-- AspectJ compiler version, in sync with runtime -->
                    <version>1.9.22</version>
                </dependency>
            </dependencies>
            <executions>
                <execution>
                    <goals>
                        <goal>compile</goal>
                    </goals>
                </execution>
            </executions>
        </plugin>
        ...
    </plugins>
</build>
```
```groovy
plugins {
    id 'java'
    id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0' // Not needed when using the functional approach
}

repositories {
    mavenCentral()
}

dependencies {
    aspect 'software.amazon.lambda:powertools-logging:2.8.0' // Not needed when using the functional approach
    implementation 'software.amazon.lambda:powertools-logging-log4j:2.8.0'
}

sourceCompatibility = 11
targetCompatibility = 11
```
```groovy
plugins {
    id 'java'
    id 'io.freefair.aspectj.post-compile-weaving' version '8.1.0' // Not needed when using the functional approach
}

repositories {
    mavenCentral()
}

dependencies {
    aspect 'software.amazon.lambda:powertools-logging:2.8.0' // Not needed when using the functional approach
    implementation 'software.amazon.lambda:powertools-logging-logback:2.8.0'
}

sourceCompatibility = 11
targetCompatibility = 11
```
Powertools for AWS Lambda (Java) simply extends the functionality of the underlying library you choose (log4j2 or logback). You can leverage the standard configuration files (log4j2.xml or logback.xml):
The log level is generally configured in log4j2.xml or logback.xml. But this level is static: changing it requires redeploying the function. Powertools for AWS Lambda lets you change the level dynamically through the POWERTOOLS_LOG_LEVEL environment variable.
We support the following log levels (SLF4J levels): TRACE, DEBUG, INFO, WARN, ERROR. If the level is set to CRITICAL (supported by log4j but not logback), we map it to ERROR. If the level is set to any other value, we fall back to the default value (INFO).
Use it when you want to set a logging policy that drops informational or verbose logs for one or all AWS Lambda functions, regardless of runtime and logger used.
When enabled, you should keep your own log level and ALC log level in sync to avoid data loss.
Here's a sequence diagram to demonstrate how ALC will drop both INFO and DEBUG logs emitted from Logger, when ALC log level is stricter than Logger.
```mermaid
sequenceDiagram
    participant Lambda service
    participant Lambda function
    participant Application Logger

    Note over Lambda service: AWS_LAMBDA_LOG_LEVEL="WARN"
    Note over Application Logger: POWERTOOLS_LOG_LEVEL="DEBUG"
    Lambda service->>Lambda function: Invoke (event)
    Lambda function->>Lambda function: Calls handler
    Lambda function->>Application Logger: logger.error("Something happened")
    Lambda function-->>Application Logger: logger.debug("Something happened")
    Lambda function-->>Application Logger: logger.info("Something happened")
    Lambda service--xLambda service: DROP INFO and DEBUG logs
    Lambda service->>CloudWatch Logs: Ingest error logs
```
Priority of log level settings in Powertools for AWS Lambda
We prioritise log level settings in this order:
1. AWS_LAMBDA_LOG_LEVEL environment variable
2. POWERTOOLS_LOG_LEVEL environment variable
3. Level defined in the log4j2.xml or logback.xml files
If you set POWERTOOLS_LOG_LEVEL lower than ALC, we will emit a warning informing you that your messages will be discarded by Lambda.
Note
With ALC enabled, we cannot set a minimum log level lower (more verbose) than the AWS_LAMBDA_LOG_LEVEL environment variable value; see the AWS Lambda service documentation for more details.
As shown in the example above, you can use arguments (with StructuredArguments) without placeholders ({}) in the message. If you add the placeholders, the arguments will be logged both as an additional field and also as a string in the log message, using the toString() method.
You can also combine structured arguments with non-structured ones. For example:
LOGGER.info("Processing order {}",order.getOrderId(),entry("order",order));
{"level":"INFO","message":"Processing order 23542","service":"payment","timestamp":"2023-12-01T14:49:19.293Z","xray_trace_id":"1-6569f266-4b0c7f97280dcd8428d3c9b5","order":{"orderId":23542,"amount":459.99,"date":"2023-12-01T14:49:19.018Z","customerId":328496}}
Do not use reserved keys in StructuredArguments
If the key name of your structured argument matches any of the standard structured keys or any of the additional structured keys, the Logger will log a warning message and ignore the key. This is to protect you from accidentally overwriting reserved keys such as the log level or Lambda context information.
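For example, a structured argument that reuses a reserved key such as level (a standard structured key, as seen in the JSON output above) is dropped with a warning rather than overwriting the log level:

```java
// "level" is a reserved standard structured key: the Logger emits a
// warning and ignores this entry instead of overwriting the log level.
LOGGER.info("Processing order", entry("level", "CRITICAL"));
```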
Using MDC
Mapped Diagnostic Context (MDC) is essentially a key-value store. It is supported by the SLF4J API, logback, and log4j (where it is known as ThreadContext). You can use the standard API:
MDC.put("key", "value");
Custom keys stored in the MDC are persisted across warm invocations
Always set additional keys as part of your handler method to ensure they have the latest value, or explicitly clear them with clearState=true.
Do not add reserved keys to MDC
Avoid adding any of the keys listed in standard structured keys and additional structured keys to your MDC. This may cause unintended behavior and will overwrite the context set by the Logger. Unlike with StructuredArguments, the Logger will not ignore reserved keys set via MDC.
Logger is commonly initialized in the global scope. Due to Lambda Execution Context reuse, custom keys added with MDC can persist across invocations. You can clear state using clearState=true on the @Logging annotation, or use the functional API, which handles cleanup automatically.
clearState is based on MDC.clear(). State clearing is automatically done at the end of the execution of the handler if set to true.
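A minimal sketch combining MDC with clearState (the class and key names are illustrative):

```java
import org.slf4j.MDC;
import software.amazon.lambda.powertools.logging.Logging;
// ... other imports

public class OrderFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(OrderFunction.class);

    @Logging(clearState = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // Set inside the handler so the value is fresh on every invocation;
        // clearState = true runs MDC.clear() when the handler completes.
        MDC.put("orderId", input.getPathParameters().get("orderId"));
        LOGGER.info("Processing order"); // "orderId" is emitted as an additional field
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```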
Tip
When using the functional API with PowertoolsLogging.withLogging(), state is automatically cleared at the end of execution, so you don't need to manage it manually.
When debugging in non-production environments, you can log the incoming event using the @Logging annotation with the logEvent parameter, via the POWERTOOLS_LOGGER_LOG_EVENT environment variable, or manually with the functional API.
Warning
This is disabled by default to prevent sensitive info being logged.
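A minimal sketch using the annotation approach (setting POWERTOOLS_LOGGER_LOG_EVENT=true achieves the same without code changes):

```java
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // The incoming event is logged at the beginning of the invocation.
    @Logging(logEvent = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```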
When debugging in non-production environments, you can log the response using the @Logging annotation with the logResponse parameter, via the POWERTOOLS_LOGGER_LOG_RESPONSE environment variable, or manually with the functional API.
Warning
This is disabled by default to prevent sensitive info being logged.
If you use this on a RequestStreamHandler, Powertools must duplicate output streams in order to log them when used together with the @Logging annotation.
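As above, a minimal sketch using the annotation approach:

```java
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // The handler's response is logged before it is returned to the caller.
    @Logging(logResponse = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```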
By default, AWS Lambda logs any uncaught exception thrown by the handler. However, this log is not structured and does not contain any additional context. When using the @Logging annotation, you can enable structured exception logging with the logError parameter or the POWERTOOLS_LOGGER_LOG_ERROR environment variable.
Warning
This is disabled by default to prevent double logging.
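A minimal sketch using the logError parameter:

```java
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {

    // An uncaught exception is logged as a structured entry before propagating.
    @Logging(logError = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        throw new RuntimeException("Unexpected failure");
    }
}
```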
Note
This feature is only available when using the @Logging annotation. When using the functional API, you must catch and log exceptions manually using try-catch blocks.
```java
import org.slf4j.MarkerFactory;

public class AppLogError implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(AppLogError.class);

    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        return PowertoolsLogging.withLogging(context, () -> {
            try {
                // ...
                return new APIGatewayProxyResponseEvent().withStatusCode(200);
            } catch (Exception e) {
                LOGGER.error(MarkerFactory.getMarker("FATAL"), "Exception in Lambda Handler", e);
                throw e;
            }
        });
    }
}
```
Log buffering enables you to buffer logs for a specific request or invocation. Enable log buffering by configuring the BufferingAppender in your logging configuration. You can buffer logs at the WARNING, INFO or DEBUG level, and flush them automatically on error or manually as needed.
This is useful when you want to reduce the number of log messages emitted while still having detailed logs when needed, such as when troubleshooting issues.
```java
import org.slf4j.Logger;
import org.slf4j.LoggerFactory;
import software.amazon.lambda.powertools.logging.Logging;
// ... other imports

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.debug("a debug log"); // this is buffered
        LOGGER.info("an info log");  // this is not buffered

        // do stuff

        // Buffer is automatically cleared at the end of the method by @Logging annotation
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
When configuring log buffering, you have options to fine-tune how logs are captured, stored, and emitted. You can configure the following parameters in the BufferingAppender configuration:
| Parameter | Description | Configuration |
| --- | --- | --- |
| maxBytes | Maximum size of the log buffer in bytes | int (default: 20480 bytes) |
| bufferAtVerbosity | Minimum log level to buffer | DEBUG (default), INFO, WARNING |
| flushOnErrorLog | Automatically flush buffer when ERROR or FATAL level logs are emitted | true (default), false |
Logger Level Configuration
To use log buffering effectively, set your logger levels to the same level as bufferAtVerbosity, or a more verbose one, so the logging framework captures and forwards logs to the BufferingAppender. For example, if you want to buffer DEBUG level logs and emit INFO+ level logs directly, you must:
1. Set your logger levels to DEBUG in your log4j2.xml or logback.xml configuration
2. Set POWERTOOLS_LOG_LEVEL=DEBUG if using the environment variable (see the Log level section for more details)
If you want to sample INFO and WARNING logs but not DEBUG logs, set your log level to INFO and bufferAtVerbosity to WARNING. This allows you to define the lower and upper bounds for buffering. All logs with a more severe level than bufferAtVerbosity will be emitted directly.
<?xml version="1.0" encoding="UTF-8"?><Configuration><Appenders><Consolename="JsonAppender"target="SYSTEM_OUT"><JsonTemplateLayouteventTemplateUri="classpath:LambdaJsonLayout.json"/></Console><BufferingAppendername="BufferedJsonAppender"maxBytes="20480"bufferAtVerbosity="WARNING"><AppenderRefref="JsonAppender"/></BufferingAppender></Appenders><Loggers><!-- Intentionally set to DEBUG to forward all logs to BufferingAppender --><Loggername="com.example"level="debug"additivity="false"><AppenderRefref="BufferedJsonAppender"/></Logger><Rootlevel="debug"><AppenderRefref="BufferedJsonAppender"/></Root></Loggers></Configuration>
```java
public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.warn("a warning log"); // this is buffered
        LOGGER.info("an info log");   // this is buffered
        LOGGER.debug("a debug log");  // this is buffered

        // do stuff

        // Buffer is automatically cleared at the end of the method by @Logging annotation
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
```java
import software.amazon.lambda.powertools.logging.PowertoolsLogging;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.debug("a debug log"); // this is buffered

        // do stuff

        try {
            throw new RuntimeException("Something went wrong");
        } catch (RuntimeException error) {
            LOGGER.error("An error occurred", error); // Logs won't be flushed here
        }

        // Manually flush buffered logs
        PowertoolsLogging.flushBuffer();
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
When flushOnErrorLog is disabled, logging an error will not flush the buffer. This is useful when you want to control exactly when the buffer is flushed by calling the flush method manually.
You can manually control the log buffer using the PowertoolsLogging utility class, which provides a backend-independent API that works with both Log4j2 and Logback:
```java
import software.amazon.lambda.powertools.logging.PowertoolsLogging;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.debug("Processing payment");          // this is buffered
        LOGGER.info("Payment validation complete");  // this is buffered

        // Manually flush all buffered logs
        PowertoolsLogging.flushBuffer();
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
```java
import software.amazon.lambda.powertools.logging.PowertoolsLogging;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.debug("Processing payment");          // this is buffered
        LOGGER.info("Payment validation complete");  // this is buffered

        // Manually clear buffered logs without outputting them
        PowertoolsLogging.clearBuffer();
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
Available methods:

- PowertoolsLogging.flushBuffer() - Outputs all buffered logs and clears the buffer
- PowertoolsLogging.clearBuffer() - Discards all buffered logs without outputting them
Use the @Logging annotation to automatically flush buffered logs when an uncaught exception is raised in your Lambda function. This is enabled by default (flushBufferOnUncaughtError = true), but you can explicitly configure it if needed.
Warning
This feature is only available when using the @Logging annotation. When using the functional API, you must manually flush the buffer in exception handlers.
```java
public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    @Logging(flushBufferOnUncaughtError = true)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        LOGGER.debug("a debug log"); // this is buffered

        // do stuff

        throw new RuntimeException("Something went wrong"); // Logs will be flushed here
    }
}
```
```java
import software.amazon.lambda.powertools.logging.PowertoolsLogging;

public class PaymentFunction implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(PaymentFunction.class);

    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        return PowertoolsLogging.withLogging(context, () -> {
            try {
                LOGGER.debug("a debug log"); // this is buffered

                // do stuff

                throw new RuntimeException("Something went wrong");
            } catch (Exception e) {
                PowertoolsLogging.flushBuffer(); // Manually flush buffered logs
                throw e;
            }
        });
    }
}
```
Does the buffer persist across Lambda invocations? No, each Lambda invocation has its own buffer. The buffer is initialized when the Lambda function is invoked and is cleared after the function execution completes or when flushed manually.
Are my logs buffered during cold starts (INIT phase)? No, we never buffer logs during cold starts. This is because we want to ensure that logs emitted during this phase are always available for debugging and monitoring purposes. The buffer is only used during the execution of the Lambda function.
How can I prevent log buffering from consuming excessive memory? You can limit the size of the buffer by setting the maxBytes option in the BufferingAppender configuration. This will ensure that the buffer does not grow indefinitely.
What happens if the log buffer reaches its maximum size? Older logs are removed from the buffer to make room for new logs. This means that if the buffer is full, you may lose some logs if they are not flushed before the buffer reaches its maximum size. When this happens, we emit a warning when flushing the buffer to indicate that some logs have been dropped.
How is the log size of a log line calculated? The log size is calculated based on the size of the log line in bytes. This includes the size of the log message, any exception (if present), the log line location, additional keys, and the timestamp.
What timestamp is used when I flush the logs? The timestamp is the original time when the log record was created. If you create a log record at 11:00:10 and flush it at 11:00:25, the log line will retain its original timestamp of 11:00:10.
What happens if I try to add a log line that is bigger than max buffer size? The log will be emitted directly to standard output and not buffered. When this happens, we emit a warning to indicate that the log line was too big to be buffered.
What happens if Lambda times out without flushing the buffer? Logs that are still in the buffer will be lost.
How does the BufferingAppender work with different appenders? The BufferingAppender is designed to wrap arbitrary appenders, providing maximum flexibility. You can wrap console appenders, file appenders, or any custom appenders with buffering functionality.
You can dynamically set a percentage of your logs to DEBUG level to be included in the logger output, regardless of the configured log level, using the POWERTOOLS_LOGGER_SAMPLE_RATE environment variable, the samplingRate attribute on the @Logging annotation, or a parameter in the functional API.
Info
Configuration via the environment variable takes precedence over the samplingRate attribute, provided its value is in the valid range.
```java
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    @Logging(samplingRate = 0.5)
    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        // will eventually be logged based on the sampling rate
        LOGGER.debug("Handle payment");
        return new APIGatewayProxyResponseEvent().withStatusCode(200);
    }
}
```
```java
public class App implements RequestHandler<APIGatewayProxyRequestEvent, APIGatewayProxyResponseEvent> {
    private static final Logger LOGGER = LoggerFactory.getLogger(App.class);

    public APIGatewayProxyResponseEvent handleRequest(final APIGatewayProxyRequestEvent input, final Context context) {
        return PowertoolsLogging.withLogging(context, 0.5, () -> {
            // will eventually be logged based on the sampling rate
            LOGGER.debug("Handle payment");
            return new APIGatewayProxyResponseEvent().withStatusCode(200);
        });
    }
}
```
You can go further and customize which fields you want to keep in your logs. The configuration varies according to the underlying logging library.
You can create your own template and leverage the PowertoolsResolver, together with any other resolver, to log the desired fields in the desired format. Some examples of customization are given below:
By default, the utility emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' using the system default timezone. If you need to customize the format and timezone, you can update your template.json or configure log4j2.component.properties as shown in the examples below:
By default, the utility emits the timestamp field in the format yyyy-MM-dd'T'HH:mm:ss.SSS'Z' using the system default timezone. If you need to customize the format and timezone, you can use the following:
The utility also supports the Elastic Common Schema (ECS) format. The fields emitted in logs will follow the ECS specification, together with the fields captured by the utility as mentioned above.