This incident has been resolved.
Oct 13, 13:07 EEST
Since ~8:20 UTC, all our services have been running normally again. We are still seeing some performance instability in our provider's network, but this should not impact our services, as we have sufficient redundancy elsewhere. We are continuing to monitor the situation closely.
Oct 13, 12:06 EEST
Meanwhile, some services are slowly coming back. We have regained access to one of our regional storage mirrors, which is enough to make all services fully functional again (though with some remaining performance impact). We are also still seeing some instability, most likely because the backbone routes are coming back up step by step, so there is still congestion while not all link capacity is back online. We continue to monitor the situation.
Oct 13, 11:33 EEST
Unfortunately, the problem at our upstream provider still persists. From what we have heard, a human error in the configuration of their core routers caused their entire backbone to go down, resulting in a global outage. We currently cannot access any of our cloud object storage in any region, which has a major impact on our transcoder service and affects the CDN as well.
Oct 13, 11:09 EEST
We are currently experiencing a major outage of our main upstream cloud provider: a total outage of all their systems across all regions. Because the outage is global, we suspect a major issue affecting their entire network, but we do not know more yet.
Oct 13, 10:30 EEST