How is high availability achieved for SQL Azure databases?
The goal of the High Availability architecture in Azure SQL Database is to guarantee that databases are up and running 99.99% of the time within a data centre region. A region is a set of data centres deployed within a latency-defined perimeter and connected through a dedicated regional low-latency network.
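To put the 99.99% figure in concrete terms, a quick back-of-the-envelope calculation shows how much cumulative downtime that SLA permits per year:

```python
# Allowed downtime implied by a 99.99% availability guarantee.
availability = 0.9999
minutes_per_year = 365.25 * 24 * 60            # 525,960 minutes in a year
downtime_minutes = minutes_per_year * (1 - availability)
print(f"{downtime_minutes:.1f} minutes of downtime per year")  # ≈ 52.6
```

In other words, the guarantee allows for under an hour of cumulative unavailability per year, typically spread across many brief failover transitions rather than one long outage.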
The high availability solution is designed to ensure that committed data is never lost due to failures, that maintenance operations do not affect the system, and that the database will not be a single point of failure in the software architecture. There are no maintenance windows or downtimes that should require the application to be stopped while the database is upgraded or maintained.
The availability model includes two layers:
- A stateless compute layer that runs the sqlserver.exe process and contains only transient and cached data on the attached SSD, such as TempDB, the model database, the plan cache, the buffer pool and the column store pool. This stateless node is operated by Azure Service Fabric, which initializes sqlserver.exe, monitors the health of the node, and performs failover to another node if necessary.
- A stateful data layer with the database files (.mdf/.ldf) stored in Azure Blob storage. Azure Blob storage has built-in data availability and redundancy features, guaranteeing that every record in the log file and every page in the data file is preserved even if the SQL Server process crashes.
Whenever the database engine or the operating system is upgraded, or a failure is detected, Azure Service Fabric will automatically move the stateless SQL Server process to another stateless compute node with sufficient free capacity. Data in Azure Blob storage is not affected by the move, and the data/log files are attached to the newly initialized SQL Server process. This process guarantees 99.99% availability, but an application under heavy load may experience some performance degradation during the transition since the new SQL Server instance starts with a cold cache.
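Because these failover transitions can briefly drop connections, client applications are expected to retry transient failures with backoff rather than fail outright. The sketch below illustrates the pattern; `operation` and the use of `ConnectionError` are hypothetical stand-ins, since a real application would catch its database driver's specific transient error codes instead:

```python
import random
import time

def with_retry(operation, max_attempts=5, base_delay=1.0):
    """Retry a database operation with exponential backoff and jitter.

    During a failover, connections may fail for a short period while the
    stateless SQL Server process moves to a new compute node; retrying
    with backoff lets the application ride out the transition.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except ConnectionError:  # stand-in for driver-specific transient errors
            if attempt == max_attempts:
                raise  # give up after the final attempt
            # Exponential backoff (1s, 2s, 4s, ...) plus random jitter,
            # so many clients do not all reconnect at the same instant.
            delay = base_delay * (2 ** (attempt - 1)) + random.uniform(0, 0.5)
            time.sleep(delay)
```

In practice the exception filter should match the transient error codes documented for the driver in use; the structure of the loop — bounded attempts, exponential delay, jitter — is the part that matters.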