We've identified a large spike in an internal resource (EBS volume utilization on an API server) that caused widespread unavailability. We have resolved the underlying EC2 issue and are actively working with AWS support to prevent further incidents. The data back-fill process is progressing as expected, and we anticipate no data loss.
Aug 14, 2018 - 14:48 UTC
We're investigating an issue in our internal API pipeline that caused an outage. Customers may have experienced elevated error rates or unavailability. We've identified the problem, the service is back online, and customer data is being back-filled.
Aug 14, 2018 - 12:41 UTC
This incident affected: Web Application, APIs, and Metrics Ingest Pipeline.