Microsoft Azure Storage and Database Part 30 – Azure Table Storage – Overview
Hope you all are doing well !!! 🙂 .
In our previous article we discussed how we can Store And Process Messages In Azure Queue. Today in this article we will give an overview of the Azure Table Storage service.
Tool Installation Articles :
- Configure Azure Command Line Interface ( Azure CLI) On Windows
- Configure PowerShell For Microsoft Azure Az Module On Windows
Previous Azure Series :
- Learn Basics Of Azure Networking In 60 Hours
- Learn Basic Of Azure Active Directory And Azure Identity And Access Management
- Azure DevOps – Learn at one place
If you have missed our previous articles in the Azure Storage and Database Series, please check them at the following links.
Azure Table storage
Azure Table storage is a sub-service of the Azure Storage account service that allows us to store large amounts of structured, non-relational data. It has the following features.
- Cloud-based NoSQL datastore
- Highly scalable and very cheap
- Store structured or semi-structured data which is highly available
- Store non-relational data
- Offers a schemaless design
- Accepts authenticated calls from inside and outside of the Azure cloud
- One storage account may contain any number of tables, up to the capacity limit of the storage account.
- Quick data queries using a clustered index
- Create apps that require a flexible data schema
- Create massively scalable apps
- Perform OData-based queries
- Use JSON to serialize data
- The format of a base URI for accessing the Table service is https://<myStorageaccount>.table.core.windows.net
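As a quick illustration of the endpoint format above, here is a minimal sketch; the account name `mystorageaccount` is a placeholder, not a real account:

```python
def table_endpoint(account_name: str) -> str:
    """Build the base URI of the Table service for a given storage account."""
    return f"https://{account_name}.table.core.windows.net"

# Hypothetical account name, for illustration only.
print(table_endpoint("mystorageaccount"))
# https://mystorageaccount.table.core.windows.net
```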
Azure Table Storage Structure
When we talk about Azure Table storage, a particular structure comes to mind. The basic structure is shared with the other storage options (Blob, File, Queue) as well, but in the case of Table storage it is a little different. The following components are part of the Azure Table storage structure.
Storage Accounts : As with the other storage options, when accessing Azure Table storage directly, everything is managed through our storage account. But when we are using Azure Cosmos DB, access is managed through our Table API account.
So there are two types of Table storage services available in Azure. The first one is Azure Table storage, and the second one is a premium version offered through Azure Cosmos DB. If we are looking for brilliant performance with low latency, particularly when we are dealing with mission-critical applications, then we should go for Azure Cosmos DB. If we can compromise on performance and latency but want to optimize cost, then we should go for Table storage.
Table : Tables operate the same for both Azure Table storage and the Table API. These tables are collections of entities without schemas. Since tables don't impose a schema on entities, a single table can contain entities with different sets of properties.
Entity : An entity is a set of properties, similar to a database row. An entity in Azure Table storage can be up to 1 MB in size, while an entity in Azure Cosmos DB can be up to 2 MB.
Properties : A property is a name-value pair. Each entity can include up to 252 custom properties to store data, and in addition to our custom properties there are also the following system properties.
- Partition Key : The Partition key is a unique identifier for the partition within a given table, specified by the PartitionKey property.
- Row Key : The Row key is a unique identifier for an entity within a given partition, specified by the RowKey property.
- Timestamp : The Timestamp property is a Date Time value that is maintained on the server side to record the time an entity was last modified.
So every entity will have those three properties by default. When we query the data, we query with the Partition key and Row key. Generally, fetching entities from a single partition is very rapid, because all the entities belonging to the same partition are stored on one server in the background within Azure.
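The entity model above can be sketched with plain dictionaries; a real client such as the `azure-data-tables` SDK accepts entities in essentially this shape. The table name, property names, and values below are made up for illustration:

```python
# Two entities in the same hypothetical "Customers" table. Note that the
# entities carry different custom properties -- the table is schemaless.
entity_a = {
    "PartitionKey": "Europe",   # identifies the partition within the table
    "RowKey": "cust-001",       # unique within the partition
    "Name": "Contoso",
    "Tier": "Gold",
}
entity_b = {
    "PartitionKey": "Asia",
    "RowKey": "cust-042",
    "Name": "Fabrikam",
    "Country": "Japan",         # a property that entity_a does not have
}

table = [entity_a, entity_b]

def point_query(entities, partition_key, row_key):
    """A point query: PartitionKey + RowKey uniquely identify one entity."""
    return next(
        e for e in entities
        if e["PartitionKey"] == partition_key and e["RowKey"] == row_key
    )

print(point_query(table, "Europe", "cust-001")["Name"])  # Contoso
```

This is why queries that supply both keys are the fastest: the service can go straight to one partition and one row.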
URL format : When using Azure tables, we can access data directly through the following addresses. This access is based on the OData protocol.
Azure Table storage : http://<storage account>.table.core.windows.net/<table>
Azure Cosmos DB Table API : http://<storage account>.table.cosmosdb.azure.com/<table>
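The OData addressing above also extends to queries: a filter expression is passed in the `$filter` query string. A minimal sketch of composing such a request URL, where the account and table names are placeholders:

```python
from urllib.parse import quote

def table_query_url(account: str, table: str, filter_expr: str) -> str:
    """Compose an OData $filter query URL against Azure Table storage."""
    base = f"https://{account}.table.core.windows.net/{table}"
    # The filter expression must be percent-encoded for use in a URL.
    return f"{base}()?$filter={quote(filter_expr)}"

# Hypothetical account, table, and properties, for illustration only.
url = table_query_url("mystorageaccount", "Customers",
                      "PartitionKey eq 'Europe' and Tier eq 'Gold'")
print(url)
```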
Comparison Between Azure Table Storage And Azure Cosmos DB Table API
While Cosmos DB Table API and Azure Table storage can both provide similar functionality, the two services are not identical. Below you can learn how these services differ and the capacities of each.
When using Azure Table storage there is no upper bound on the latency of our operations. In contrast, Cosmos DB limits read/write latency to under 10 milliseconds.
With Azure Table, our throughput is limited to 20k operations per second while with Cosmos DB throughput is supported for up to 10 million operations per second. Additionally, Cosmos DB provides automatic indexing of properties. This can be used during querying to increase performance.
We can use Azure Table in a single region with a secondary, read-only region for increased availability. In contrast, with Cosmos DB we can distribute our data across up to 30 regions. Automatic, global failover is included and we can choose between five consistency levels for our desired combination of throughput, latency, and availability.
We can use the same API with both Azure Table and Cosmos DB. There are also software development kits (SDKs) available alongside a generic REST API. However, Cosmos DB exposes a superset of functionality with additional methods. Because the API is shared, we can easily transfer data between Azure Table and Cosmos DB.
Billing in Table storage is determined by our storage volume. Pricing is per GB and affected by our selected redundancy level; the more GB we use, the cheaper the per-GB rate. We are also charged according to the number of operations we perform, per 10k operations.
Billing in Cosmos DB is determined by the number of throughput request units (RUs). Our database is provisioned in increments of 100 RU/s, and we are billed hourly for the provisioned throughput. We are also billed for storage per GB, at a higher rate than Table storage.
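A back-of-the-envelope sketch of the provisioning rule above: requested throughput is rounded up to the next 100 RU/s increment. The numbers are illustrative only, not real Azure rates:

```python
import math

def provisioned_rus(requested_rus_per_sec: int, increment: int = 100) -> int:
    """Cosmos DB provisions throughput in increments of 100 RU/s, so a
    request is rounded up to the next whole increment before billing."""
    return math.ceil(requested_rus_per_sec / increment) * increment

# Asking for 250 RU/s actually provisions (and bills for) 300 RU/s.
print(provisioned_rus(250))  # 300
print(provisioned_rus(100))  # 100
```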
When we access data from our storage account, our client makes a request over HTTP/HTTPS to Azure Storage. Every request to a secure resource must be authorized, so that the service can ensure the client has the permissions required to access the requested data. The following list shows the options that Azure Table storage offers for authorizing access to data.
- Shared Key (storage account key) ==> Supported
- Shared access signature (SAS) ==> Supported
- Azure Active Directory (Azure AD) ==> Supported (Preview)
- On-premises Active Directory Domain Services ==> Not Supported
- Anonymous public read access ==> Not Supported
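For example, with Shared Key authorization, clients are typically configured with a storage connection string that carries the account name and key. A minimal sketch of splitting one into its parts; the connection string and key below are fake placeholders:

```python
def parse_connection_string(conn_str: str) -> dict:
    """Split an Azure storage connection string into its key/value parts."""
    parts = {}
    for segment in conn_str.split(";"):
        if segment:
            # partition() splits on the first '=' only, so base64 key
            # padding ('==') inside the value is preserved.
            key, _, value = segment.partition("=")
            parts[key] = value
    return parts

# A made-up connection string with a fake account key, for illustration only.
conn = ("DefaultEndpointsProtocol=https;"
        "AccountName=mystorageaccount;"
        "AccountKey=FAKEKEY123==;"
        "EndpointSuffix=core.windows.net")
settings = parse_connection_string(conn)
print(settings["AccountName"])  # mystorageaccount
```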
When To Use Azure Table Storage
We can consider Table storage in the following scenarios:
- Storing TBs of structured data
- Data can be denormalized
- No need for joins or a schema
- Quick queries with a clustered index
- OData query support
- JSON-serializable data
- Serverless requirements
- Web applications
- Simple logging
- Metadata and configuration stores
Scale Targets For Table Storage
The following table describes capacity, scalability, and performance targets for Table storage.
|Resource|Target|
|---|---|
|Number of tables in an Azure storage account|Limited only by the capacity of the storage account|
|Number of partitions in a table|Limited only by the capacity of the storage account|
|Number of entities in a partition|Limited only by the capacity of the storage account|
|Maximum size of a single table|500 TiB|
|Maximum size of a single entity, including all property values|1 MiB|
|Maximum number of properties in a table entity|255 (including the three system properties, PartitionKey, RowKey, and Timestamp)|
|Maximum total size of an individual property in an entity|Varies by property type. For more information, see Property Types in Understanding the Table Service Data Model.|
|Size of the PartitionKey|A string up to 1 KiB in size|
|Size of the RowKey|A string up to 1 KiB in size|
|Size of an entity group transaction|A transaction can include at most 100 entities and the payload must be less than 4 MiB in size. An entity group transaction can include an update to an entity only once.|
|Maximum number of stored access policies per table|5|
|Maximum request rate per storage account|20,000 transactions per second, which assumes a 1-KiB entity size|
|Target throughput for a single table partition (1-KiB entities)|Up to 2,000 entities per second|
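The entity-group-transaction limit in the table above (at most 100 entities per batch) implies client-side batching when writing many entities. A sketch of that chunking, with hypothetical entity data:

```python
def chunk_for_batch(entities, max_per_batch=100):
    """Split entities into batches that respect the 100-entity limit of an
    entity group transaction. All entities in one batch must also share a
    PartitionKey; here we assume the caller has already grouped them."""
    return [entities[i:i + max_per_batch]
            for i in range(0, len(entities), max_per_batch)]

# 250 hypothetical entities from one partition -> 3 batches: 100, 100, 50.
rows = [{"PartitionKey": "p1", "RowKey": str(i)} for i in range(250)]
batches = chunk_for_batch(rows)
print([len(b) for b in batches])  # [100, 100, 50]
```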
With the above information, I am concluding this article. I hope it is informative for you. Please let me know if I missed anything important or if my understanding is not up to the mark. Keep reading, and share your thoughts and experiences. Feel free to contact us to discuss more.
If you have any suggestion / feedback / doubt, you are most welcome. Stay tuned on Knowledge-Junction, will come up with more such articles.
Thanks for reading 🙂 .