All Systems Operational
Uptime over the past 90 days:

Modelbit Ohio (app.modelbit.com): Operational, 99.99% uptime
- Web Application and API: Operational, 100.0% uptime
- Running Models: Operational, 99.98% uptime

Modelbit Mumbai (ap-south-1.modelbit.com): Operational, 99.99% uptime
- Web Application and API: Operational, 100.0% uptime
- Running Models: Operational, 99.99% uptime

Modelbit Virginia (us-east-1.modelbit.com): Operational, 99.99% uptime
- Web Application and API: Operational, 100.0% uptime
- Running Models: Operational, 99.99% uptime
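For scale, a 99.99% uptime figure over a 90-day window allows roughly 13 minutes of total downtime. The short Python sketch below shows the arithmetic; it is plain math, not a Modelbit API.

WINDOW_MINUTES = 90 * 24 * 60  # 129,600 minutes in a 90-day window

def downtime_minutes(uptime_pct: float, window_minutes: int = WINDOW_MINUTES) -> float:
    """Minutes of downtime implied by an uptime percentage over the window."""
    return (1 - uptime_pct / 100) * window_minutes

for pct in (100.0, 99.99, 99.98):
    print(f"{pct:.2f}% uptime -> {downtime_minutes(pct):.2f} minutes of downtime")
# 100.00% -> 0.00, 99.99% -> 12.96, 99.98% -> 25.92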
System Metrics: Model Latency and Web Latency charts for Modelbit Ohio, Modelbit Mumbai, and Modelbit Virginia.
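If you need these statuses programmatically: the layout of this page suggests it is hosted on Atlassian Statuspage, which exposes a JSON summary endpoint at /api/v2/summary.json on the status page's own hostname. The hostname below is an assumption; substitute the actual status page address.

import json
import urllib.request

STATUS_URL = "https://status.modelbit.com/api/v2/summary.json"  # assumed hostname

with urllib.request.urlopen(STATUS_URL, timeout=10) as resp:
    summary = json.load(resp)

print(summary["status"]["description"])  # e.g. "All Systems Operational"
for component in summary["components"]:
    print(f'{component["name"]}: {component["status"]}')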
Past Incidents
May 20, 2024

No incidents reported today.

May 19, 2024

No incidents reported.

May 18, 2024

No incidents reported.

May 17, 2024

No incidents reported.

May 16, 2024

No incidents reported.

May 15, 2024

No incidents reported.

May 14, 2024

No incidents reported.

May 13, 2024

No incidents reported.

May 12, 2024

No incidents reported.

May 11, 2024

No incidents reported.

May 10, 2024

No incidents reported.

May 9, 2024

No incidents reported.

May 8, 2024

No incidents reported.

May 7, 2024

No incidents reported.

May 6, 2024
Resolved - A bug in Modelbit allowed a customer with a slow, GPU-requiring model to consume all available GPUs by sending a large batch of inferences. While Modelbit processed that batch, inference requests from all other customers timed out. The team is now fixing the underlying issue; a simplified sketch of one possible guard appears after the timeline below.
May 6, 19:41 UTC
Update - Inferences are running normally at present, but the team is still investigating the cause of the spike in queue length and timeouts.
May 6, 19:33 UTC
Investigating - We are currently investigating long inference request queues leading to delayed inferences and timeouts.
May 6, 19:23 UTC
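The root cause above is a shared GPU pool with no per-tenant limit: one customer's large batch could hold every GPU while everyone else timed out. A common guard is a per-customer concurrency cap in front of the pool. The sketch below is hypothetical and not Modelbit's implementation; TOTAL_GPUS, MAX_GPUS_PER_CUSTOMER, and _execute_on_gpu are illustrative names.

import threading

TOTAL_GPUS = 8                 # size of the shared GPU pool (assumed)
MAX_GPUS_PER_CUSTOMER = 2      # cap so one tenant cannot drain the pool

_pool = threading.Semaphore(TOTAL_GPUS)
_per_customer: dict[str, threading.Semaphore] = {}
_registry_lock = threading.Lock()

def _customer_sem(customer_id: str) -> threading.Semaphore:
    """Lazily create one semaphore per customer, guarding creation with a lock."""
    with _registry_lock:
        if customer_id not in _per_customer:
            _per_customer[customer_id] = threading.Semaphore(MAX_GPUS_PER_CUSTOMER)
        return _per_customer[customer_id]

def run_inference(customer_id: str, request: dict) -> dict:
    """Run one inference while holding at most MAX_GPUS_PER_CUSTOMER GPUs per tenant."""
    with _customer_sem(customer_id):   # tenant cap acquired first...
        with _pool:                    # ...then a GPU slot from the shared pool
            return _execute_on_gpu(request)

def _execute_on_gpu(request: dict) -> dict:
    return {"result": None}  # placeholder for the actual model call

Acquiring the tenant cap before the pool slot means a customer waiting behind their own limit never occupies a GPU, so a large batch queues behind its own cap instead of starving other tenants.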