API is degraded
Resolved
Jul 27 at 08:07am PDT
We are back!
Affected services
Updated
Jul 27 at 07:38am PDT
We need to update our Kubernetes node pools to restore service. To achieve this, we are strategically reducing the number of pods Firecrawl uses so that fewer nodes need updating. This means Firecrawl will run at reduced capacity for a short period, but full functionality will be restored sooner. Thank you for your patience.
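The mitigation above (packing workloads onto fewer nodes before rotating the pool) can be sketched with standard kubectl commands; the deployment name, node name, and replica counts below are hypothetical, not Firecrawl's actual configuration:

```shell
# Scale the (hypothetical) worker deployment down so pods fit on fewer nodes
kubectl scale deployment firecrawl-workers --replicas=10

# Cordon and drain a node slated for the pool update (node name hypothetical)
kubectl cordon old-pool-node-1
kubectl drain old-pool-node-1 --ignore-daemonsets --delete-emptydir-data

# After the node pool update completes, restore full capacity
kubectl scale deployment firecrawl-workers --replicas=50
```

These are cluster-mutating ops commands, so they are shown as a sketch only; the exact rollout would depend on the cluster's autoscaler and pod disruption budgets.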
Updated
Jul 27 at 05:56am PDT
We are seeing some intermittent timeout failures. We are investigating.
Updated
Jul 26 at 08:39pm PDT
All endpoints have recovered. We are continuing to investigate the root cause.
Updated
Jul 26 at 06:01pm PDT
Scrape has recovered; other endpoints are still affected. We are observing large-scale interruptions in TCP traffic on long-lived connections between our workers and our Redis instance. The cause is not yet known.
Updated
Jul 26 at 05:24pm PDT
The issue has regressed again.
Updated
Jul 26 at 05:13pm PDT
The issue is resolved. We are still monitoring the situation.
Updated
Jul 26 at 05:06pm PDT
The issue has regressed. We are still working to clear up the queue and resolve it.
Updated
Jul 26 at 04:44pm PDT
The issue is resolved.
Created
Jul 26 at 04:29pm PDT
We are currently investigating an issue where some scrapes are timing out.