Coveralls is in Read Only mode while we work on updating the system. Sorry for the inconvenience.

Check this page any time you notice a problem with Coveralls.

Monitoring - A fix has been implemented and we are monitoring the results.
Jun 12, 2025 - 20:37 PDT
Coveralls.io Web - Degraded Performance
Coveralls.io API - Degraded Performance
GitHub - Operational
Travis CI API - Operational
Pusher Presence channels - Operational
Pusher WebSocket client API - Operational
Stripe API - Operational
New Relic Metric API: US - Operational
Average Web/API Response Time (metrics graph)
Coverage Calculation Background Job Dequeue Time (metrics graph)
Jun 15, 2025

No incidents reported today.

Jun 14, 2025

No incidents reported.

Jun 13, 2025

No incidents reported.

Jun 12, 2025
Resolved - This incident has been resolved.
Jun 12, 18:18 PDT
Update - We are continuing to monitor for any further issues.
Jun 12, 13:13 PDT
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 12, 11:24 PDT
Identified - The issue has been identified and a fix is being implemented.
Jun 12, 06:19 PDT
Jun 11, 2025
Resolved - This incident has been resolved.
Jun 11, 10:09 PDT
Monitoring - We have identified an incident that slowed or delayed processing for a set of builds from 3-6 AM PDT. We have resolved the issue, and we are scaling servers and manually clearing the backlog. We will be monitoring until clear.
Jun 11, 06:48 PDT
Jun 10, 2025

No incidents reported.

Jun 9, 2025

No incidents reported.

Jun 8, 2025

No incidents reported.

Jun 7, 2025

No incidents reported.

Jun 6, 2025

No incidents reported.

Jun 5, 2025

No incidents reported.

Jun 4, 2025

No incidents reported.

Jun 3, 2025
Resolved - All builds from today (Jun 3) have been processed. As background job queues cleared, build times returned to normal. We will continue monitoring.
Jun 3, 21:11 PDT
Update - We took the following action to more quickly restore normal build times for all _new_ builds today:

- We moved all unfinished background jobs from yesterday (Jun 2) into holding queues in order to restore normal build times for new builds from today (Jun 3).

- We scaled resources to more quickly drain the existing queues of jobs from new builds from today (Jun 3).

We will monitor progress on all new builds and provide updates here until we're fully caught up (zero (0) background jobs in queue).

Thanks for your patience in the meantime as we restore the best possible performance to the service.

Jun 3, 10:34 PDT
Monitoring - We are continuing to clear a backlog of background processing jobs for builds submitted in the past 18-24 hours. While all systems are operational, there will continue to be latency on build times until we clear all background job queues, which are FIFO. Current estimate: 1 hour. We will post updates here until build times return to normal.
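The latency estimate above follows from how FIFO queues drain: time to clear is roughly the backlog depth divided by the net drain rate. A minimal back-of-the-envelope sketch (the function name and all numbers are illustrative assumptions, not figures from the incident):

```python
# Hypothetical sketch: estimate minutes until a FIFO backlog clears.
# All names and numbers here are illustrative, not from the incident.
def estimated_drain_minutes(backlog_jobs, dequeue_per_min, enqueue_per_min):
    """Backlog divided by net drain rate; infinite if the queue can't drain."""
    net_rate = dequeue_per_min - enqueue_per_min
    if net_rate <= 0:
        return float("inf")  # new jobs arrive as fast as old ones finish
    return backlog_jobs / net_rate

# Example: 12,000 queued jobs, draining 300/min while 100/min still arrive
print(estimated_drain_minutes(12000, 300, 100))  # 60.0
```

This is also why scaling workers (raising the dequeue rate) or parking old jobs in holding queues (cutting the backlog) both shorten the estimate.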
Jun 3, 08:33 PDT
Resolved - This incident has been resolved, but we will continue monitoring closely.

All systems are operational, but we will leave the systems category at Degraded Performance until we have fully cleared a backlog of background processing jobs.

Jun 3, 03:52 PDT
Update - We have implemented another fix and are monitoring the results.
Jun 2, 20:05 PDT
Update - We are continuing to monitor for any further issues.
Jun 2, 17:50 PDT
Monitoring - A partial fix has been implemented and we are monitoring the results.
Jun 2, 15:54 PDT
Update - We are continuing to work on a fix for this issue.
Jun 2, 15:05 PDT
Identified - The issue has been identified and a fix is being implemented.
Jun 2, 13:48 PDT
Investigating - While monitoring we have discovered some additional planner anomalies that are slowing down queries associated with our various calculation jobs.

We are investigating those again and working to identify and implement a fix.

We will continue posting updates here.

Jun 2, 12:10 PDT
Update - All systems operational. We are carefully scaling resources and monitoring database performance to ensure stable recovery.

Some delays in build and coverage report processing may still be observed as we restore full capacity.

Thank you for your continued patience — we’ll share further updates as recovery progresses.

Jun 2, 09:51 PDT
Update - We have completed implementation of our fix. We are cautiously resuming background processing and will continue monitoring closely. If you notice any delays in build processing, rest assured they will be resolved shortly.

Thank you for your patience — more updates will follow as we return to full capacity.

Jun 2, 09:08 PDT
Update - We’re currently experiencing an outage due to unexpected query planner behavior following our recent upgrade to PostgreSQL 16.

Despite extensive preparation and testing, one of our core background queries began performing full table scans under the new version, causing a rapid increase in load and job backlog.

What we're doing:

- We’ve paused background job processing to stabilize the system.
- We tried quick fixes, such as adjusting DB parameters that affect planner choices, to no effect.
- We're now actively deploying a targeted database index to resolve the performance issue.
- We’ve identified a longer-term fix that will make the query safer and more efficient on the new version of PostgreSQL.

Why this happened:

PostgreSQL 16 introduced changes to how certain types of queries are planned. A query that performed well in PostgreSQL 12 unexpectedly triggered a much more expensive plan in 16. We're correcting for that now.

Estimated recovery:

Background job processing is expected to resume within 20–40 minutes, with full service restoration shortly thereafter.

We’ll continue to post updates here as we make progress. Thanks for your patience — we’re on it.

Jun 2, 08:55 PDT
Monitoring - A fix has been implemented and we are monitoring the results.
Jun 2, 08:34 PDT
Update - We are continuing to work on a fix for this issue.
Jun 2, 08:06 PDT
Update - We are continuing to work on a fix for this issue.
Jun 2, 07:52 PDT
Update - We need to pause processing momentarily to clear a backlog of DB connections. We cut over to a new database version this weekend, and even after months of planning and preventative steps, planner regressions are still common during periods of elevated usage after such a change. We will identify the offending SQL statements, fix their planner issues, and restart work as soon as possible. Thanks for your patience as we work through this as quickly as possible.
Jun 2, 07:20 PDT
Identified - The issue has been identified and a fix is being implemented.
Jun 2, 07:18 PDT
Investigating - We are currently investigating this issue.
Jun 2, 06:42 PDT
Jun 2, 2025
Jun 1, 2025
Completed - The scheduled maintenance has been completed.
Jun 1, 20:09 PDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
Jun 1, 19:00 PDT
Scheduled - We will be undergoing scheduled maintenance during this time.
Jun 1, 18:55 PDT
Completed - The scheduled maintenance has been completed.
Jun 1, 03:00 PDT
In progress - Scheduled maintenance is currently in progress. We will provide updates as necessary.
May 31, 22:00 PDT
Scheduled - We will be undergoing scheduled maintenance during this time.
May 28, 11:06 PDT